Test Report: QEMU_macOS 19446

68089f2e899ecb1db727fde03c1d4991123fd325:2024-08-14:35784

Failed tests (97/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 20.26
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.04
46 TestCertOptions 10.18
47 TestCertExpiration 196.21
48 TestDockerFlags 12.26
49 TestForceSystemdFlag 12.44
50 TestForceSystemdEnv 10.04
95 TestFunctional/parallel/ServiceCmdConnect 38.6
167 TestMultiControlPlane/serial/StopSecondaryNode 214.14
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 102.85
169 TestMultiControlPlane/serial/RestartSecondaryNode 183.77
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.38
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.07
174 TestMultiControlPlane/serial/StopCluster 202.09
175 TestMultiControlPlane/serial/RestartCluster 5.25
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 9.95
184 TestJSONOutput/start/Command 9.89
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.05
216 TestMountStart/serial/StartWithMountFirst 9.93
219 TestMultiNode/serial/FreshStart2Nodes 10
220 TestMultiNode/serial/DeployApp2Nodes 93.54
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.07
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 43
228 TestMultiNode/serial/RestartKeepsNodes 8.93
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 2.25
231 TestMultiNode/serial/RestartMultiNode 5.25
232 TestMultiNode/serial/ValidateNameConflict 20.08
236 TestPreload 10.11
238 TestScheduledStopUnix 9.99
239 TestSkaffold 13.22
242 TestRunningBinaryUpgrade 610.15
244 TestKubernetesUpgrade 18.76
258 TestStoppedBinaryUpgrade/Upgrade 587.68
268 TestPause/serial/Start 9.95
271 TestNoKubernetes/serial/StartWithK8s 10.63
272 TestNoKubernetes/serial/StartWithStopK8s 7.55
273 TestNoKubernetes/serial/Start 7.51
274 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.12
275 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.36
279 TestNoKubernetes/serial/StartNoArgs 5.36
281 TestNetworkPlugins/group/auto/Start 9.85
282 TestNetworkPlugins/group/flannel/Start 9.98
283 TestNetworkPlugins/group/enable-default-cni/Start 9.96
284 TestNetworkPlugins/group/kindnet/Start 9.82
285 TestNetworkPlugins/group/bridge/Start 9.92
286 TestNetworkPlugins/group/kubenet/Start 9.8
287 TestNetworkPlugins/group/custom-flannel/Start 9.83
288 TestNetworkPlugins/group/calico/Start 9.81
289 TestNetworkPlugins/group/false/Start 9.82
291 TestStartStop/group/old-k8s-version/serial/FirstStart 10.1
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 9.95
303 TestStartStop/group/no-preload/serial/DeployApp 0.09
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/no-preload/serial/SecondStart 5.24
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
311 TestStartStop/group/no-preload/serial/Pause 0.1
313 TestStartStop/group/embed-certs/serial/FirstStart 10.09
314 TestStartStop/group/embed-certs/serial/DeployApp 0.09
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
318 TestStartStop/group/embed-certs/serial/SecondStart 5.62
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.01
321 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
322 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
323 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
324 TestStartStop/group/embed-certs/serial/Pause 0.1
326 TestStartStop/group/newest-cni/serial/FirstStart 9.94
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.81
336 TestStartStop/group/newest-cni/serial/SecondStart 5.25
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (20.26s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-622000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-622000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (20.262457833s)

-- stdout --
	{"specversion":"1.0","id":"5d8b720a-528a-424d-b2e8-0a77b4f786ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-622000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"36ae795c-fe7c-43a5-9e95-98bab51f1881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19446"}}
	{"specversion":"1.0","id":"98bcedfd-39c0-4a26-9d5e-e427211be3d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig"}}
	{"specversion":"1.0","id":"67e36be7-b2e4-4e2d-aad6-337f820a404d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"07be7197-0d9e-4d25-8798-ddcce45df542","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"968e2563-da04-4303-a2b2-57d5a939c7c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube"}}
	{"specversion":"1.0","id":"f1457595-8097-4771-9130-34702041c309","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"04d8a405-a1fd-4825-b841-18bcc686e071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"85b83358-245e-4501-b057-124bcf475a67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"157cfed0-63ef-4e9e-b4cf-b89694d49137","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"92b892f2-a15c-467e-ab90-70551953f70d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-622000\" primary control-plane node in \"download-only-622000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"862edde3-085b-4ab6-b3e9-f576ee6db797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c29d9bb0-3c19-4ca6-9e9f-6cf4c2e36473","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920] Decompressors:map[bz2:0x14000619090 gz:0x14000619098 tar:0x14000619040 tar.bz2:0x14000619050 tar.gz:0x14000619060 tar.xz:0x14000619070 tar.zst:0x14000619080 tbz2:0x14000619050 tgz:0x14
000619060 txz:0x14000619070 tzst:0x14000619080 xz:0x140006190a0 zip:0x140006190b0 zst:0x140006190a8] Getters:map[file:0x14000062880 http:0x1400068e320 https:0x1400068e370] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"b5462929-9254-4e2f-bc78-2e0d5b5be7f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0814 09:09:06.011850    1602 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:09:06.012023    1602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:09:06.012026    1602 out.go:304] Setting ErrFile to fd 2...
	I0814 09:09:06.012029    1602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:09:06.012169    1602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	W0814 09:09:06.012253    1602 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19446-1067/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19446-1067/.minikube/config/config.json: no such file or directory
	I0814 09:09:06.013521    1602 out.go:298] Setting JSON to true
	I0814 09:09:06.031808    1602 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":503,"bootTime":1723651243,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:09:06.031880    1602 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:09:06.039466    1602 out.go:97] [download-only-622000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:09:06.039634    1602 notify.go:220] Checking for updates...
	W0814 09:09:06.039682    1602 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball: no such file or directory
	I0814 09:09:06.043425    1602 out.go:169] MINIKUBE_LOCATION=19446
	I0814 09:09:06.050465    1602 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:09:06.056503    1602 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:09:06.060391    1602 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:09:06.063434    1602 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	W0814 09:09:06.069438    1602 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0814 09:09:06.069682    1602 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:09:06.074439    1602 out.go:97] Using the qemu2 driver based on user configuration
	I0814 09:09:06.074470    1602 start.go:297] selected driver: qemu2
	I0814 09:09:06.074493    1602 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:09:06.074607    1602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:09:06.078469    1602 out.go:169] Automatically selected the socket_vmnet network
	I0814 09:09:06.085218    1602 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0814 09:09:06.085311    1602 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 09:09:06.085399    1602 cni.go:84] Creating CNI manager for ""
	I0814 09:09:06.085423    1602 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0814 09:09:06.085479    1602 start.go:340] cluster config:
	{Name:download-only-622000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:09:06.090945    1602 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:09:06.095420    1602 out.go:97] Downloading VM boot image ...
	I0814 09:09:06.095435    1602 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso
	I0814 09:09:14.926864    1602 out.go:97] Starting "download-only-622000" primary control-plane node in "download-only-622000" cluster
	I0814 09:09:14.926897    1602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0814 09:09:14.989182    1602 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0814 09:09:14.989191    1602 cache.go:56] Caching tarball of preloaded images
	I0814 09:09:14.989386    1602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0814 09:09:14.994426    1602 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0814 09:09:14.994433    1602 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0814 09:09:15.084448    1602 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0814 09:09:25.127013    1602 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0814 09:09:25.127195    1602 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0814 09:09:25.826672    1602 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0814 09:09:25.826888    1602 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/download-only-622000/config.json ...
	I0814 09:09:25.826907    1602 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/download-only-622000/config.json: {Name:mk45d6afe9bef05848dff417b6d0ed76463e3de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:09:25.827157    1602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0814 09:09:25.827397    1602 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0814 09:09:26.198437    1602 out.go:169] 
	W0814 09:09:26.202335    1602 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920] Decompressors:map[bz2:0x14000619090 gz:0x14000619098 tar:0x14000619040 tar.bz2:0x14000619050 tar.gz:0x14000619060 tar.xz:0x14000619070 tar.zst:0x14000619080 tbz2:0x14000619050 tgz:0x14000619060 txz:0x14000619070 tzst:0x14000619080 xz:0x140006190a0 zip:0x140006190b0 zst:0x140006190a8] Getters:map[file:0x14000062880 http:0x1400068e320 https:0x1400068e370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0814 09:09:26.202361    1602 out_reason.go:110] 
	W0814 09:09:26.211367    1602 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:09:26.216252    1602 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-622000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (20.26s)
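
Note: the root cause here is a plain 404 on the kubectl checksum URL for v1.20.0 on darwin/arm64. As a quick manual sanity check (not part of the test suite), the exact URL from the error message can be probed with curl; a 404 on the .sha256 file indicates no kubectl artifact is published for this version/platform pair:

    # Hypothetical reproduction outside the harness, using the URL from the log.
    # A 404 status line here matches the "bad response code: 404" getter error.
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1

The TestDownloadOnly/v1.20.0/kubectl failure below is the same issue surfacing as a missing file in the cache.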

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.04s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-556000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-556000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.889787917s)

-- stdout --
	* [offline-docker-556000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-556000" primary control-plane node in "offline-docker-556000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-556000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:45:30.510333    3811 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:45:30.510475    3811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:45:30.510479    3811 out.go:304] Setting ErrFile to fd 2...
	I0814 09:45:30.510482    3811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:45:30.510623    3811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:45:30.511787    3811 out.go:298] Setting JSON to false
	I0814 09:45:30.529677    3811 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2687,"bootTime":1723651243,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:45:30.529791    3811 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:45:30.533664    3811 out.go:177] * [offline-docker-556000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:45:30.540486    3811 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:45:30.540512    3811 notify.go:220] Checking for updates...
	I0814 09:45:30.546438    3811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:45:30.549453    3811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:45:30.552486    3811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:45:30.555395    3811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:45:30.558449    3811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:45:30.561857    3811 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:45:30.561914    3811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:45:30.564474    3811 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:45:30.571504    3811 start.go:297] selected driver: qemu2
	I0814 09:45:30.571513    3811 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:45:30.571519    3811 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:45:30.573497    3811 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:45:30.574686    3811 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:45:30.577643    3811 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:45:30.577681    3811 cni.go:84] Creating CNI manager for ""
	I0814 09:45:30.577689    3811 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:45:30.577699    3811 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:45:30.577738    3811 start.go:340] cluster config:
	{Name:offline-docker-556000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:45:30.581464    3811 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:45:30.584512    3811 out.go:177] * Starting "offline-docker-556000" primary control-plane node in "offline-docker-556000" cluster
	I0814 09:45:30.592523    3811 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:45:30.592544    3811 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:45:30.592553    3811 cache.go:56] Caching tarball of preloaded images
	I0814 09:45:30.592616    3811 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:45:30.592622    3811 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:45:30.592685    3811 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/offline-docker-556000/config.json ...
	I0814 09:45:30.592695    3811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/offline-docker-556000/config.json: {Name:mk148b61d5af13581bf148659330c0af2a13e070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:45:30.592973    3811 start.go:360] acquireMachinesLock for offline-docker-556000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:45:30.593009    3811 start.go:364] duration metric: took 25µs to acquireMachinesLock for "offline-docker-556000"
	I0814 09:45:30.593025    3811 start.go:93] Provisioning new machine with config: &{Name:offline-docker-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:45:30.593061    3811 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:45:30.597438    3811 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0814 09:45:30.613179    3811 start.go:159] libmachine.API.Create for "offline-docker-556000" (driver="qemu2")
	I0814 09:45:30.613216    3811 client.go:168] LocalClient.Create starting
	I0814 09:45:30.613288    3811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:45:30.613318    3811 main.go:141] libmachine: Decoding PEM data...
	I0814 09:45:30.613327    3811 main.go:141] libmachine: Parsing certificate...
	I0814 09:45:30.613367    3811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:45:30.613390    3811 main.go:141] libmachine: Decoding PEM data...
	I0814 09:45:30.613397    3811 main.go:141] libmachine: Parsing certificate...
	I0814 09:45:30.613741    3811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:45:30.765903    3811 main.go:141] libmachine: Creating SSH key...
	I0814 09:45:30.863065    3811 main.go:141] libmachine: Creating Disk image...
	I0814 09:45:30.863074    3811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:45:30.863235    3811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2
	I0814 09:45:30.872920    3811 main.go:141] libmachine: STDOUT: 
	I0814 09:45:30.872946    3811 main.go:141] libmachine: STDERR: 
	I0814 09:45:30.872998    3811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2 +20000M
	I0814 09:45:30.881268    3811 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:45:30.881298    3811 main.go:141] libmachine: STDERR: 
	I0814 09:45:30.881329    3811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2
	I0814 09:45:30.881336    3811 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:45:30.881345    3811 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:45:30.881370    3811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:96:bf:c5:15:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2
	I0814 09:45:30.883199    3811 main.go:141] libmachine: STDOUT: 
	I0814 09:45:30.883219    3811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:45:30.883237    3811 client.go:171] duration metric: took 270.026209ms to LocalClient.Create
	I0814 09:45:32.884221    3811 start.go:128] duration metric: took 2.29123625s to createHost
	I0814 09:45:32.884232    3811 start.go:83] releasing machines lock for "offline-docker-556000", held for 2.291299583s
	W0814 09:45:32.884245    3811 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:45:32.898734    3811 out.go:177] * Deleting "offline-docker-556000" in qemu2 ...
	W0814 09:45:32.909472    3811 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:45:32.909481    3811 start.go:729] Will try again in 5 seconds ...
	I0814 09:45:37.911608    3811 start.go:360] acquireMachinesLock for offline-docker-556000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:45:37.912017    3811 start.go:364] duration metric: took 308.083µs to acquireMachinesLock for "offline-docker-556000"
	I0814 09:45:37.912148    3811 start.go:93] Provisioning new machine with config: &{Name:offline-docker-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:45:37.912470    3811 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:45:37.930007    3811 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0814 09:45:37.979714    3811 start.go:159] libmachine.API.Create for "offline-docker-556000" (driver="qemu2")
	I0814 09:45:37.979766    3811 client.go:168] LocalClient.Create starting
	I0814 09:45:37.979887    3811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:45:37.979948    3811 main.go:141] libmachine: Decoding PEM data...
	I0814 09:45:37.979964    3811 main.go:141] libmachine: Parsing certificate...
	I0814 09:45:37.980034    3811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:45:37.980079    3811 main.go:141] libmachine: Decoding PEM data...
	I0814 09:45:37.980089    3811 main.go:141] libmachine: Parsing certificate...
	I0814 09:45:37.980743    3811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:45:38.147052    3811 main.go:141] libmachine: Creating SSH key...
	I0814 09:45:38.298074    3811 main.go:141] libmachine: Creating Disk image...
	I0814 09:45:38.298079    3811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:45:38.298274    3811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2
	I0814 09:45:38.308098    3811 main.go:141] libmachine: STDOUT: 
	I0814 09:45:38.308120    3811 main.go:141] libmachine: STDERR: 
	I0814 09:45:38.308175    3811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2 +20000M
	I0814 09:45:38.316120    3811 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:45:38.316135    3811 main.go:141] libmachine: STDERR: 
	I0814 09:45:38.316146    3811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2
	I0814 09:45:38.316152    3811 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:45:38.316166    3811 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:45:38.316198    3811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:e3:80:02:f6:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/offline-docker-556000/disk.qcow2
	I0814 09:45:38.317730    3811 main.go:141] libmachine: STDOUT: 
	I0814 09:45:38.317747    3811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:45:38.317760    3811 client.go:171] duration metric: took 338.000208ms to LocalClient.Create
	I0814 09:45:40.319853    3811 start.go:128] duration metric: took 2.407426791s to createHost
	I0814 09:45:40.319903    3811 start.go:83] releasing machines lock for "offline-docker-556000", held for 2.407947042s
	W0814 09:45:40.320243    3811 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-556000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-556000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:45:40.339879    3811 out.go:177] 
	W0814 09:45:40.348928    3811 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:45:40.348979    3811 out.go:239] * 
	* 
	W0814 09:45:40.351019    3811 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:45:40.359709    3811 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-556000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-14 09:45:40.371188 -0700 PDT m=+2194.526242793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-556000 -n offline-docker-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-556000 -n offline-docker-556000: exit status 7 (50.672083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-556000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-556000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-556000
--- FAIL: TestOffline (10.04s)
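
Note: this is the first of many identical start failures in this run. The qemu2 driver launches QEMU through socket_vmnet_client (full command in the stderr log above), and the connection to /var/run/socket_vmnet is refused, so the VM never boots. A minimal triage sketch to run on the agent; only the client binary and socket path appear in the logs, and the daemon's process name is assumed to match "socket_vmnet":

    # Is the socket present, and is a socket_vmnet daemon running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # socket_vmnet_client wraps an arbitrary command; `true` exercises only the
    # socket handshake. "Connection refused" here reproduces the failure above.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the daemon is down, restarting it (however it is managed on this agent) should clear the whole family of GUEST_PROVISION failures that follow.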

TestCertOptions (10.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-392000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
E0814 09:57:01.078675    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:57:02.956550    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-392000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.910587875s)

-- stdout --
	* [cert-options-392000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-392000" primary control-plane node in "cert-options-392000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-392000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-392000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-392000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-392000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-392000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.216291ms)

-- stdout --
	* The control-plane node cert-options-392000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-392000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-392000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-392000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-392000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-392000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.899333ms)

-- stdout --
	* The control-plane node cert-options-392000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-392000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-392000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-392000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-392000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-14 09:57:09.711369 -0700 PDT m=+2883.940698959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-392000 -n cert-options-392000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-392000 -n cert-options-392000: exit status 7 (30.619708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-392000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-392000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-392000
--- FAIL: TestCertOptions (10.18s)
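
Every check in this test fails for the same underlying reason: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network socket, the VM never boots, and every follow-up command finds the host Stopped. A minimal triage sketch for the CI host (standard macOS userland; these commands are not part of the test suite):

	# Is the socket_vmnet daemon alive, and does its socket exist?
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"
	ls -l /var/run/socket_vmnet    # should show a unix socket ("s" in the mode bits)

If the socket file exists but connections are still refused, the daemon that created it has most likely exited and left a stale socket behind.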

TestCertExpiration (196.21s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-067000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-067000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.822362583s)

-- stdout --
	* [cert-expiration-067000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-067000" primary control-plane node in "cert-expiration-067000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-067000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-067000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-067000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-067000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-067000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.232078625s)

-- stdout --
	* [cert-expiration-067000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-067000" primary control-plane node in "cert-expiration-067000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-067000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-067000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-067000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-067000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-067000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-067000" primary control-plane node in "cert-expiration-067000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-067000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-067000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-067000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-14 09:59:54.790552 -0700 PDT m=+3049.027104459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-067000 -n cert-expiration-067000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-067000 -n cert-expiration-067000: exit status 7 (66.342125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-067000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-067000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-067000
--- FAIL: TestCertExpiration (196.21s)
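
The same connection-refused failure occurs on both the fresh create and the retry after delete, which points at the host-side socket_vmnet daemon rather than at either profile. A possible remediation, sketched on the assumption that socket_vmnet was installed from source under /opt/socket_vmnet (the SocketVMnetClientPath in the config dumps in this report suggests that layout); the gateway address is an illustrative value, not taken from this report:

	# Start the daemon in the foreground to watch for startup errors
	# (vmnet needs root; in normal operation this runs as a launchd service):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the socket accepts connections again, the qemu2-driver tests in this report should stop failing at VM creation.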

TestDockerFlags (12.26s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-846000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-846000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.015920083s)

-- stdout --
	* [docker-flags-846000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-846000" primary control-plane node in "docker-flags-846000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-846000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:56:47.410939    4453 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:56:47.411079    4453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:56:47.411083    4453 out.go:304] Setting ErrFile to fd 2...
	I0814 09:56:47.411085    4453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:56:47.411207    4453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:56:47.412305    4453 out.go:298] Setting JSON to false
	I0814 09:56:47.428948    4453 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3364,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:56:47.429021    4453 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:56:47.435054    4453 out.go:177] * [docker-flags-846000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:56:47.443924    4453 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:56:47.443957    4453 notify.go:220] Checking for updates...
	I0814 09:56:47.451817    4453 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:56:47.453165    4453 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:56:47.455854    4453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:56:47.458872    4453 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:56:47.461889    4453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:56:47.465181    4453 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:56:47.465247    4453 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:56:47.465295    4453 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:56:47.469904    4453 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:56:47.476832    4453 start.go:297] selected driver: qemu2
	I0814 09:56:47.476838    4453 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:56:47.476844    4453 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:56:47.479058    4453 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:56:47.481861    4453 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:56:47.484962    4453 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0814 09:56:47.484981    4453 cni.go:84] Creating CNI manager for ""
	I0814 09:56:47.484987    4453 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:56:47.484996    4453 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:56:47.485022    4453 start.go:340] cluster config:
	{Name:docker-flags-846000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:56:47.488320    4453 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:56:47.495842    4453 out.go:177] * Starting "docker-flags-846000" primary control-plane node in "docker-flags-846000" cluster
	I0814 09:56:47.498737    4453 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:56:47.498750    4453 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:56:47.498759    4453 cache.go:56] Caching tarball of preloaded images
	I0814 09:56:47.498817    4453 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:56:47.498821    4453 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:56:47.498888    4453 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/docker-flags-846000/config.json ...
	I0814 09:56:47.498898    4453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/docker-flags-846000/config.json: {Name:mkb857c6e5be45fac46c0d00781fbd499d4e07d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:56:47.499154    4453 start.go:360] acquireMachinesLock for docker-flags-846000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:56:49.526785    4453 start.go:364] duration metric: took 2.027675375s to acquireMachinesLock for "docker-flags-846000"
	I0814 09:56:49.526988    4453 start.go:93] Provisioning new machine with config: &{Name:docker-flags-846000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:56:49.527231    4453 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:56:49.535458    4453 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0814 09:56:49.585552    4453 start.go:159] libmachine.API.Create for "docker-flags-846000" (driver="qemu2")
	I0814 09:56:49.585613    4453 client.go:168] LocalClient.Create starting
	I0814 09:56:49.585754    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:56:49.585819    4453 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:49.585835    4453 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:49.585904    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:56:49.585948    4453 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:49.585965    4453 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:49.586604    4453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:56:49.745141    4453 main.go:141] libmachine: Creating SSH key...
	I0814 09:56:49.848579    4453 main.go:141] libmachine: Creating Disk image...
	I0814 09:56:49.848589    4453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:56:49.848763    4453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2
	I0814 09:56:49.858050    4453 main.go:141] libmachine: STDOUT: 
	I0814 09:56:49.858077    4453 main.go:141] libmachine: STDERR: 
	I0814 09:56:49.858121    4453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2 +20000M
	I0814 09:56:49.865929    4453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:56:49.865944    4453 main.go:141] libmachine: STDERR: 
	I0814 09:56:49.865965    4453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2
	I0814 09:56:49.865971    4453 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:56:49.865983    4453 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:56:49.866021    4453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:40:90:c8:8a:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2
	I0814 09:56:49.867615    4453 main.go:141] libmachine: STDOUT: 
	I0814 09:56:49.867629    4453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:56:49.867648    4453 client.go:171] duration metric: took 282.039417ms to LocalClient.Create
	I0814 09:56:51.869891    4453 start.go:128] duration metric: took 2.342633042s to createHost
	I0814 09:56:51.869995    4453 start.go:83] releasing machines lock for "docker-flags-846000", held for 2.343259916s
	W0814 09:56:51.870050    4453 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:56:51.877267    4453 out.go:177] * Deleting "docker-flags-846000" in qemu2 ...
	W0814 09:56:51.910072    4453 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:56:51.910093    4453 start.go:729] Will try again in 5 seconds ...
	I0814 09:56:56.912149    4453 start.go:360] acquireMachinesLock for docker-flags-846000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:56:56.912591    4453 start.go:364] duration metric: took 364.083µs to acquireMachinesLock for "docker-flags-846000"
	I0814 09:56:56.912712    4453 start.go:93] Provisioning new machine with config: &{Name:docker-flags-846000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:56:56.912997    4453 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:56:56.920616    4453 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0814 09:56:56.970353    4453 start.go:159] libmachine.API.Create for "docker-flags-846000" (driver="qemu2")
	I0814 09:56:56.970401    4453 client.go:168] LocalClient.Create starting
	I0814 09:56:56.970508    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:56:56.970568    4453 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:56.970581    4453 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:56.970659    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:56:56.970715    4453 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:56.970730    4453 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:56.971432    4453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:56:57.129977    4453 main.go:141] libmachine: Creating SSH key...
	I0814 09:56:57.327421    4453 main.go:141] libmachine: Creating Disk image...
	I0814 09:56:57.327435    4453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:56:57.327620    4453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2
	I0814 09:56:57.337089    4453 main.go:141] libmachine: STDOUT: 
	I0814 09:56:57.337168    4453 main.go:141] libmachine: STDERR: 
	I0814 09:56:57.337218    4453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2 +20000M
	I0814 09:56:57.345175    4453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:56:57.345198    4453 main.go:141] libmachine: STDERR: 
	I0814 09:56:57.345213    4453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2
	I0814 09:56:57.345221    4453 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:56:57.345228    4453 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:56:57.345270    4453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:ac:0b:3e:2b:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/docker-flags-846000/disk.qcow2
	I0814 09:56:57.346904    4453 main.go:141] libmachine: STDOUT: 
	I0814 09:56:57.346918    4453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:56:57.346932    4453 client.go:171] duration metric: took 376.541834ms to LocalClient.Create
	I0814 09:56:59.349090    4453 start.go:128] duration metric: took 2.436165875s to createHost
	I0814 09:56:59.349162    4453 start.go:83] releasing machines lock for "docker-flags-846000", held for 2.436654708s
	W0814 09:56:59.349532    4453 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:56:59.360095    4453 out.go:177] 
	W0814 09:56:59.371132    4453 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:56:59.371174    4453 out.go:239] * 
	* 
	W0814 09:56:59.373719    4453 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:56:59.383089    4453 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-846000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-846000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-846000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.18ms)

-- stdout --
	* The control-plane node docker-flags-846000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-846000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-846000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-846000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-846000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-846000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-846000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-846000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-846000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.798542ms)

-- stdout --
	* The control-plane node docker-flags-846000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-846000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-846000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-846000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-846000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-846000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-14 09:56:59.522344 -0700 PDT m=+2873.751228168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-846000 -n docker-flags-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-846000 -n docker-flags-846000: exit status 7 (29.77075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-846000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-846000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-846000
--- FAIL: TestDockerFlags (12.26s)
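
Note that TestDockerFlags never reaches its real assertions; the systemctl probes return exit status 83 only because the host never started. For reference, the two properties the test inspects can be checked by hand against a running cluster (the profile name below is reused purely for illustration, since the actual profile was deleted during cleanup):

	out/minikube-darwin-arm64 -p docker-flags-846000 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
	out/minikube-darwin-arm64 -p docker-flags-846000 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager"     # expect --debug and --icc=true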

TestForceSystemdFlag (12.44s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-505000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-505000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.259584792s)

-- stdout --
	* [force-systemd-flag-505000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-505000" primary control-plane node in "force-systemd-flag-505000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-505000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:56:11.683353    4296 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:56:11.683490    4296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:56:11.683494    4296 out.go:304] Setting ErrFile to fd 2...
	I0814 09:56:11.683496    4296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:56:11.683632    4296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:56:11.684930    4296 out.go:298] Setting JSON to false
	I0814 09:56:11.704342    4296 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3328,"bootTime":1723651243,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:56:11.704464    4296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:56:11.716276    4296 out.go:177] * [force-systemd-flag-505000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:56:11.719411    4296 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:56:11.719411    4296 notify.go:220] Checking for updates...
	I0814 09:56:11.726321    4296 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:56:11.730360    4296 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:56:11.734313    4296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:56:11.737329    4296 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:56:11.740340    4296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:56:11.743544    4296 config.go:182] Loaded profile config "NoKubernetes-463000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:56:11.743606    4296 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:56:11.743652    4296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:56:11.748313    4296 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:56:11.755327    4296 start.go:297] selected driver: qemu2
	I0814 09:56:11.755333    4296 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:56:11.755338    4296 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:56:11.757420    4296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:56:11.760309    4296 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:56:11.763395    4296 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 09:56:11.763409    4296 cni.go:84] Creating CNI manager for ""
	I0814 09:56:11.763419    4296 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:56:11.763422    4296 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:56:11.763446    4296 start.go:340] cluster config:
	{Name:force-systemd-flag-505000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-505000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:56:11.766675    4296 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:56:11.774350    4296 out.go:177] * Starting "force-systemd-flag-505000" primary control-plane node in "force-systemd-flag-505000" cluster
	I0814 09:56:11.778086    4296 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:56:11.778097    4296 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:56:11.778105    4296 cache.go:56] Caching tarball of preloaded images
	I0814 09:56:11.778149    4296 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:56:11.778154    4296 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:56:11.778203    4296 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/force-systemd-flag-505000/config.json ...
	I0814 09:56:11.778213    4296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/force-systemd-flag-505000/config.json: {Name:mkeca5e86bafcb77b9e635330061368a2e269537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:56:11.778535    4296 start.go:360] acquireMachinesLock for force-systemd-flag-505000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:56:13.850692    4296 start.go:364] duration metric: took 2.072218459s to acquireMachinesLock for "force-systemd-flag-505000"
	I0814 09:56:13.850899    4296 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-505000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-505000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:56:13.851138    4296 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:56:13.860288    4296 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0814 09:56:13.909637    4296 start.go:159] libmachine.API.Create for "force-systemd-flag-505000" (driver="qemu2")
	I0814 09:56:13.909695    4296 client.go:168] LocalClient.Create starting
	I0814 09:56:13.909811    4296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:56:13.909870    4296 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:13.909893    4296 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:13.909961    4296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:56:13.910005    4296 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:13.910022    4296 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:13.910659    4296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:56:14.162875    4296 main.go:141] libmachine: Creating SSH key...
	I0814 09:56:14.324880    4296 main.go:141] libmachine: Creating Disk image...
	I0814 09:56:14.324886    4296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:56:14.325054    4296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2
	I0814 09:56:14.334879    4296 main.go:141] libmachine: STDOUT: 
	I0814 09:56:14.334900    4296 main.go:141] libmachine: STDERR: 
	I0814 09:56:14.334946    4296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2 +20000M
	I0814 09:56:14.342977    4296 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:56:14.342994    4296 main.go:141] libmachine: STDERR: 
	I0814 09:56:14.343013    4296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2
	I0814 09:56:14.343020    4296 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:56:14.343037    4296 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:56:14.343061    4296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:c8:de:13:33:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2
	I0814 09:56:14.344704    4296 main.go:141] libmachine: STDOUT: 
	I0814 09:56:14.344724    4296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:56:14.344744    4296 client.go:171] duration metric: took 435.061375ms to LocalClient.Create
	I0814 09:56:16.346826    4296 start.go:128] duration metric: took 2.495770709s to createHost
	I0814 09:56:16.346886    4296 start.go:83] releasing machines lock for "force-systemd-flag-505000", held for 2.49625225s
	W0814 09:56:16.347030    4296 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:56:16.362491    4296 out.go:177] * Deleting "force-systemd-flag-505000" in qemu2 ...
	W0814 09:56:16.392691    4296 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:56:16.392716    4296 start.go:729] Will try again in 5 seconds ...
	I0814 09:56:21.394304    4296 start.go:360] acquireMachinesLock for force-systemd-flag-505000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:56:21.403304    4296 start.go:364] duration metric: took 8.929083ms to acquireMachinesLock for "force-systemd-flag-505000"
	I0814 09:56:21.403348    4296 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-505000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-505000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:56:21.403536    4296 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:56:21.414643    4296 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0814 09:56:21.462235    4296 start.go:159] libmachine.API.Create for "force-systemd-flag-505000" (driver="qemu2")
	I0814 09:56:21.462368    4296 client.go:168] LocalClient.Create starting
	I0814 09:56:21.462496    4296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:56:21.462564    4296 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:21.462579    4296 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:21.462653    4296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:56:21.462700    4296 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:21.462710    4296 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:21.463185    4296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:56:21.703285    4296 main.go:141] libmachine: Creating SSH key...
	I0814 09:56:21.834331    4296 main.go:141] libmachine: Creating Disk image...
	I0814 09:56:21.834337    4296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:56:21.834520    4296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2
	I0814 09:56:21.844222    4296 main.go:141] libmachine: STDOUT: 
	I0814 09:56:21.844253    4296 main.go:141] libmachine: STDERR: 
	I0814 09:56:21.844315    4296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2 +20000M
	I0814 09:56:21.852208    4296 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:56:21.852231    4296 main.go:141] libmachine: STDERR: 
	I0814 09:56:21.852244    4296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2
	I0814 09:56:21.852249    4296 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:56:21.852267    4296 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:56:21.852294    4296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:f8:42:fc:b4:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-flag-505000/disk.qcow2
	I0814 09:56:21.853900    4296 main.go:141] libmachine: STDOUT: 
	I0814 09:56:21.853919    4296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:56:21.853933    4296 client.go:171] duration metric: took 391.572708ms to LocalClient.Create
	I0814 09:56:23.856025    4296 start.go:128] duration metric: took 2.452558375s to createHost
	I0814 09:56:23.856095    4296 start.go:83] releasing machines lock for "force-systemd-flag-505000", held for 2.452876084s
	W0814 09:56:23.856472    4296 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-505000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-505000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:56:23.865021    4296 out.go:177] 
	W0814 09:56:23.878235    4296 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:56:23.878270    4296 out.go:239] * 
	* 
	W0814 09:56:23.880844    4296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:56:23.893151    4296 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-505000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-505000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-505000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (73.297958ms)

-- stdout --
	* The control-plane node force-systemd-flag-505000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-505000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-505000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-14 09:56:23.988073 -0700 PDT m=+2838.215403126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-505000 -n force-systemd-flag-505000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-505000 -n force-systemd-flag-505000: exit status 7 (34.656958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-505000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-505000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-505000
--- FAIL: TestForceSystemdFlag (12.44s)
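
The repeated STDERR line in this failure ("Failed to connect to "/var/run/socket_vmnet": Connection refused") means socket_vmnet_client could not reach the socket_vmnet daemon, so QEMU never received a network file descriptor. A minimal triage sketch for the CI host follows; the socket and client paths are taken from the log above, while the daemon binary path and gateway address are assumptions based on socket_vmnet's documented layout, not this report:

    # Does the unix socket minikube was told to use actually exist?
    ls -l /var/run/socket_vmnet

    # Is the daemon process alive? (the [s] keeps grep from matching itself)
    ps aux | grep '[s]ocket_vmnet'

    # If the daemon is down, starting it by hand is the usual fix; the binary
    # path and gateway IP here are assumptions, per the socket_vmnet README.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet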

TestForceSystemdEnv (10.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-202000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-202000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.799769791s)

-- stdout --
	* [force-systemd-env-202000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-202000" primary control-plane node in "force-systemd-env-202000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-202000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:56:37.375330    4412 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:56:37.375456    4412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:56:37.375460    4412 out.go:304] Setting ErrFile to fd 2...
	I0814 09:56:37.375462    4412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:56:37.375580    4412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:56:37.376517    4412 out.go:298] Setting JSON to false
	I0814 09:56:37.394233    4412 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3354,"bootTime":1723651243,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:56:37.394307    4412 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:56:37.400391    4412 out.go:177] * [force-systemd-env-202000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:56:37.408321    4412 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:56:37.408374    4412 notify.go:220] Checking for updates...
	I0814 09:56:37.415272    4412 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:56:37.418303    4412 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:56:37.421345    4412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:56:37.424342    4412 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:56:37.427345    4412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0814 09:56:37.430672    4412 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:56:37.430722    4412 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:56:37.435221    4412 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:56:37.442318    4412 start.go:297] selected driver: qemu2
	I0814 09:56:37.442324    4412 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:56:37.442329    4412 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:56:37.444680    4412 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:56:37.448337    4412 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:56:37.451327    4412 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 09:56:37.451362    4412 cni.go:84] Creating CNI manager for ""
	I0814 09:56:37.451369    4412 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:56:37.451373    4412 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:56:37.451399    4412 start.go:340] cluster config:
	{Name:force-systemd-env-202000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-202000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:56:37.455373    4412 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:56:37.463244    4412 out.go:177] * Starting "force-systemd-env-202000" primary control-plane node in "force-systemd-env-202000" cluster
	I0814 09:56:37.467326    4412 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:56:37.467346    4412 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:56:37.467361    4412 cache.go:56] Caching tarball of preloaded images
	I0814 09:56:37.467441    4412 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:56:37.467447    4412 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:56:37.467520    4412 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/force-systemd-env-202000/config.json ...
	I0814 09:56:37.467532    4412 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/force-systemd-env-202000/config.json: {Name:mk7513f4b22a73430b94252764da1686c5106b7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:56:37.467875    4412 start.go:360] acquireMachinesLock for force-systemd-env-202000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:56:37.467915    4412 start.go:364] duration metric: took 31.125µs to acquireMachinesLock for "force-systemd-env-202000"
	I0814 09:56:37.467929    4412 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-202000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-202000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:56:37.467961    4412 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:56:37.475340    4412 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0814 09:56:37.493761    4412 start.go:159] libmachine.API.Create for "force-systemd-env-202000" (driver="qemu2")
	I0814 09:56:37.493794    4412 client.go:168] LocalClient.Create starting
	I0814 09:56:37.493861    4412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:56:37.493894    4412 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:37.493903    4412 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:37.493939    4412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:56:37.493965    4412 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:37.493975    4412 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:37.494381    4412 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:56:37.643535    4412 main.go:141] libmachine: Creating SSH key...
	I0814 09:56:37.752024    4412 main.go:141] libmachine: Creating Disk image...
	I0814 09:56:37.752029    4412 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:56:37.752197    4412 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2
	I0814 09:56:37.761817    4412 main.go:141] libmachine: STDOUT: 
	I0814 09:56:37.761838    4412 main.go:141] libmachine: STDERR: 
	I0814 09:56:37.761898    4412 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2 +20000M
	I0814 09:56:37.770104    4412 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:56:37.770119    4412 main.go:141] libmachine: STDERR: 
	I0814 09:56:37.770143    4412 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2
	I0814 09:56:37.770148    4412 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:56:37.770164    4412 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:56:37.770201    4412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:3a:cd:88:b4:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2
	I0814 09:56:37.771857    4412 main.go:141] libmachine: STDOUT: 
	I0814 09:56:37.771873    4412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:56:37.771893    4412 client.go:171] duration metric: took 278.106791ms to LocalClient.Create
	I0814 09:56:39.774004    4412 start.go:128] duration metric: took 2.306120375s to createHost
	I0814 09:56:39.774059    4412 start.go:83] releasing machines lock for "force-systemd-env-202000", held for 2.306233583s
	W0814 09:56:39.774139    4412 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:56:39.792277    4412 out.go:177] * Deleting "force-systemd-env-202000" in qemu2 ...
	W0814 09:56:39.817289    4412 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:56:39.817312    4412 start.go:729] Will try again in 5 seconds ...
	I0814 09:56:44.819398    4412 start.go:360] acquireMachinesLock for force-systemd-env-202000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:56:44.820025    4412 start.go:364] duration metric: took 493µs to acquireMachinesLock for "force-systemd-env-202000"
	I0814 09:56:44.820183    4412 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-202000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-202000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:56:44.820482    4412 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:56:44.836049    4412 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0814 09:56:44.887510    4412 start.go:159] libmachine.API.Create for "force-systemd-env-202000" (driver="qemu2")
	I0814 09:56:44.887560    4412 client.go:168] LocalClient.Create starting
	I0814 09:56:44.887664    4412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:56:44.887740    4412 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:44.887756    4412 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:44.887826    4412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:56:44.887869    4412 main.go:141] libmachine: Decoding PEM data...
	I0814 09:56:44.887883    4412 main.go:141] libmachine: Parsing certificate...
	I0814 09:56:44.888414    4412 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:56:45.049432    4412 main.go:141] libmachine: Creating SSH key...
	I0814 09:56:45.079636    4412 main.go:141] libmachine: Creating Disk image...
	I0814 09:56:45.079641    4412 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:56:45.079803    4412 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2
	I0814 09:56:45.089058    4412 main.go:141] libmachine: STDOUT: 
	I0814 09:56:45.089077    4412 main.go:141] libmachine: STDERR: 
	I0814 09:56:45.089124    4412 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2 +20000M
	I0814 09:56:45.097056    4412 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:56:45.097078    4412 main.go:141] libmachine: STDERR: 
	I0814 09:56:45.097089    4412 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2
	I0814 09:56:45.097094    4412 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:56:45.097100    4412 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:56:45.097129    4412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:ad:a4:75:09:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/force-systemd-env-202000/disk.qcow2
	I0814 09:56:45.098765    4412 main.go:141] libmachine: STDOUT: 
	I0814 09:56:45.098783    4412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:56:45.098803    4412 client.go:171] duration metric: took 211.245417ms to LocalClient.Create
	I0814 09:56:47.100889    4412 start.go:128] duration metric: took 2.28047725s to createHost
	I0814 09:56:47.100938    4412 start.go:83] releasing machines lock for "force-systemd-env-202000", held for 2.280989292s
	W0814 09:56:47.101327    4412 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:56:47.114883    4412 out.go:177] 
	W0814 09:56:47.119920    4412 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:56:47.119945    4412 out.go:239] * 
	* 
	W0814 09:56:47.122641    4412 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:56:47.130865    4412 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-202000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-202000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-202000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (82.634708ms)

-- stdout --
	* The control-plane node force-systemd-env-202000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-202000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-202000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-14 09:56:47.230238 -0700 PDT m=+2861.458584168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-202000 -n force-systemd-env-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-202000 -n force-systemd-env-202000: exit status 7 (38.285334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-202000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-202000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-202000
--- FAIL: TestForceSystemdEnv (10.04s)
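
Like TestForceSystemdFlag, this test failed before reaching its real assertion: the docker_test.go:110 check above is expected to report the systemd cgroup driver once a cluster actually comes up. A sketch of the same verification run by hand, reusing the commands and the MINIKUBE_FORCE_SYSTEMD setting from the log (on a healthy cluster the second command should print "systemd" rather than "cgroupfs"):

    # Re-run the failed sequence manually; MINIKUBE_FORCE_SYSTEMD=true comes
    # from the environment dump in the log above.
    MINIKUBE_FORCE_SYSTEMD=true out/minikube-darwin-arm64 start -p force-systemd-env-202000 --memory=2048 --driver=qemu2

    # Query the cgroup driver the Docker daemon inside the VM is using.
    out/minikube-darwin-arm64 -p force-systemd-env-202000 ssh "docker info --format {{.CgroupDriver}}"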

TestFunctional/parallel/ServiceCmdConnect (38.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-363000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-363000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-j2lg9" [da46fb2f-6f02-4df1-9ea0-4c846b730267] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-j2lg9" [da46fb2f-6f02-4df1-9ea0-4c846b730267] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.005971375s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30868
functional_test.go:1661: error fetching http://192.168.105.4:30868: Get "http://192.168.105.4:30868": dial tcp 192.168.105.4:30868: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30868: Get "http://192.168.105.4:30868": dial tcp 192.168.105.4:30868: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30868: Get "http://192.168.105.4:30868": dial tcp 192.168.105.4:30868: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30868: Get "http://192.168.105.4:30868": dial tcp 192.168.105.4:30868: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30868: Get "http://192.168.105.4:30868": dial tcp 192.168.105.4:30868: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30868: Get "http://192.168.105.4:30868": dial tcp 192.168.105.4:30868: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30868: Get "http://192.168.105.4:30868": dial tcp 192.168.105.4:30868: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30868: Get "http://192.168.105.4:30868": dial tcp 192.168.105.4:30868: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30868: Get "http://192.168.105.4:30868": dial tcp 192.168.105.4:30868: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-363000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-j2lg9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-363000/192.168.105.4
Start Time:       Wed, 14 Aug 2024 09:19:11 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://88f4d2adcb2d0f9cc863e82c747138328c37c123de0b9d44f4364d3757b21e7f
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 14 Aug 2024 09:19:33 -0700
      Finished:     Wed, 14 Aug 2024 09:19:33 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-krbgb (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-krbgb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  37s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-j2lg9 to functional-363000
  Normal   Pulling    37s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     33s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.681s (3.691s including waiting). Image size: 84957542 bytes.
  Normal   Created    15s (x3 over 33s)  kubelet            Created container echoserver-arm
  Normal   Started    15s (x3 over 33s)  kubelet            Started container echoserver-arm
  Normal   Pulled     15s (x2 over 32s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    4s (x4 over 31s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-j2lg9_default(da46fb2f-6f02-4df1-9ea0-4c846b730267)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-363000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
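
That single log line is the root cause of the CrashLoopBackOff above: "exec format error" means the nginx binary inside registry.k8s.io/echoserver-arm:1.8 was built for a different CPU architecture than this arm64 node. A quick confirmation sketch from the node itself (the inspect format string is standard docker CLI usage, not taken from this report):

    # Print the platform the pulled image was built for; anything other than
    # linux/arm64 on this host would explain the exec format error.
    out/minikube-darwin-arm64 -p functional-363000 ssh "docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8"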
functional_test.go:1614: (dbg) Run:  kubectl --context functional-363000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.123.222
IPs:                      10.97.123.222
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30868/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
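
Note the empty Endpoints field: the crash-looping pod never became Ready, so the NodePort service had no backends, which is why every fetch of http://192.168.105.4:30868 above was refused. A one-line confirmation sketch, reusing the context name from the log:

    # An empty ENDPOINTS column here matches the "connection refused" fetches above.
    kubectl --context functional-363000 get endpoints hello-node-connect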
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-363000 -n functional-363000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-363000 ssh findmnt                                                                                      | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh findmnt                                                                                      | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT | 14 Aug 24 09:19 PDT |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh -- ls                                                                                        | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT | 14 Aug 24 09:19 PDT |
	|           | -la /mount-9p                                                                                                      |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh cat                                                                                          | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT | 14 Aug 24 09:19 PDT |
	|           | /mount-9p/test-1723652374185009000                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh stat                                                                                         | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT | 14 Aug 24 09:19 PDT |
	|           | /mount-9p/created-by-test                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh stat                                                                                         | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT | 14 Aug 24 09:19 PDT |
	|           | /mount-9p/created-by-pod                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh sudo                                                                                         | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT | 14 Aug 24 09:19 PDT |
	|           | umount -f /mount-9p                                                                                                |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh findmnt                                                                                      | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-363000                                                                                               | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port7519249/001:/mount-9p  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh findmnt                                                                                      | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT | 14 Aug 24 09:19 PDT |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh -- ls                                                                                        | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT | 14 Aug 24 09:19 PDT |
	|           | -la /mount-9p                                                                                                      |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh sudo                                                                                         | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | umount -f /mount-9p                                                                                                |                   |         |         |                     |                     |
	| mount     | -p functional-363000                                                                                               | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2836227861/001:/mount2 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-363000                                                                                               | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2836227861/001:/mount1 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh findmnt                                                                                      | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| mount     | -p functional-363000                                                                                               | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2836227861/001:/mount3 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh findmnt                                                                                      | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh findmnt                                                                                      | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT | 14 Aug 24 09:19 PDT |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh findmnt                                                                                      | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT | 14 Aug 24 09:19 PDT |
	|           | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-363000 ssh findmnt                                                                                      | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT | 14 Aug 24 09:19 PDT |
	|           | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| mount     | -p functional-363000                                                                                               | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | --kill=true                                                                                                        |                   |         |         |                     |                     |
	| start     | -p functional-363000                                                                                               | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start     | -p functional-363000                                                                                               | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start     | -p functional-363000 --dry-run                                                                                     | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                 | functional-363000 | jenkins | v1.33.1 | 14 Aug 24 09:19 PDT |                     |
	|           | -p functional-363000                                                                                               |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
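	
	The mount rows above record TestFunctional's 9p verification loop: start a mount, confirm it from inside the guest with findmnt, then force-unmount. A minimal manual replay under assumptions (the host directory /tmp/testmount is illustrative; --port 46464 matches the logged run; the mount process must stay running, hence the trailing &):
	
	  minikube -p functional-363000 mount /tmp/testmount:/mount-9p --port 46464 &
	  minikube -p functional-363000 ssh -- "findmnt -T /mount-9p | grep 9p"
	  minikube -p functional-363000 ssh -- "sudo umount -f /mount-9p"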
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 09:19:43
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:19:43.685173    2286 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:19:43.685316    2286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:19:43.685320    2286 out.go:304] Setting ErrFile to fd 2...
	I0814 09:19:43.685325    2286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:19:43.685456    2286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:19:43.686671    2286 out.go:298] Setting JSON to false
	I0814 09:19:43.703324    2286 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1140,"bootTime":1723651243,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:19:43.703409    2286 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:19:43.707366    2286 out.go:177] * [functional-363000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:19:43.714402    2286 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:19:43.714524    2286 notify.go:220] Checking for updates...
	I0814 09:19:43.721302    2286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:19:43.724378    2286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:19:43.727356    2286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:19:43.730309    2286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:19:43.733379    2286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:19:43.736624    2286 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:19:43.736876    2286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:19:43.741291    2286 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:19:43.747280    2286 start.go:297] selected driver: qemu2
	I0814 09:19:43.747286    2286 start.go:901] validating driver "qemu2" against &{Name:functional-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:19:43.747342    2286 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:19:43.749563    2286 cni.go:84] Creating CNI manager for ""
	I0814 09:19:43.749581    2286 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:19:43.749626    2286 start.go:340] cluster config:
	{Name:functional-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:19:43.760363    2286 out.go:177] * dry-run validation complete!
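	
	Per the command table above, this "Last Start" log was produced by one of the recorded --dry-run invocations, which validate the saved qemu2 profile without touching the VM:
	
	  minikube start -p functional-363000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2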
	
	
	==> Docker <==
	Aug 14 16:19:38 functional-363000 dockerd[5716]: time="2024-08-14T16:19:38.007655000Z" level=warning msg="cleaning up after shim disconnected" id=758ff334ba1789a735c7bd1c9d11b0abba1ed901703a80c79e93c851e0d0e084 namespace=moby
	Aug 14 16:19:38 functional-363000 dockerd[5716]: time="2024-08-14T16:19:38.007659084Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 14 16:19:40 functional-363000 dockerd[5709]: time="2024-08-14T16:19:40.067200779Z" level=info msg="ignoring event" container=4ab77e45ace1c0b308aed8e6cd7357138b76a5d5d84203802c20919fc7363590 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 14 16:19:40 functional-363000 dockerd[5716]: time="2024-08-14T16:19:40.067420795Z" level=info msg="shim disconnected" id=4ab77e45ace1c0b308aed8e6cd7357138b76a5d5d84203802c20919fc7363590 namespace=moby
	Aug 14 16:19:40 functional-363000 dockerd[5716]: time="2024-08-14T16:19:40.067449923Z" level=warning msg="cleaning up after shim disconnected" id=4ab77e45ace1c0b308aed8e6cd7357138b76a5d5d84203802c20919fc7363590 namespace=moby
	Aug 14 16:19:40 functional-363000 dockerd[5716]: time="2024-08-14T16:19:40.067453923Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 14 16:19:40 functional-363000 dockerd[5716]: time="2024-08-14T16:19:40.772319078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 14 16:19:40 functional-363000 dockerd[5716]: time="2024-08-14T16:19:40.772473048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 14 16:19:40 functional-363000 dockerd[5716]: time="2024-08-14T16:19:40.772497008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 14 16:19:40 functional-363000 dockerd[5716]: time="2024-08-14T16:19:40.772633019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 14 16:19:40 functional-363000 dockerd[5709]: time="2024-08-14T16:19:40.800474271Z" level=info msg="ignoring event" container=a2734e2fe64cc188275b9f3290f103da9156868e15ed04d692032ad241b8daa3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 14 16:19:40 functional-363000 dockerd[5716]: time="2024-08-14T16:19:40.800633867Z" level=info msg="shim disconnected" id=a2734e2fe64cc188275b9f3290f103da9156868e15ed04d692032ad241b8daa3 namespace=moby
	Aug 14 16:19:40 functional-363000 dockerd[5716]: time="2024-08-14T16:19:40.800665911Z" level=warning msg="cleaning up after shim disconnected" id=a2734e2fe64cc188275b9f3290f103da9156868e15ed04d692032ad241b8daa3 namespace=moby
	Aug 14 16:19:40 functional-363000 dockerd[5716]: time="2024-08-14T16:19:40.800692704Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 14 16:19:44 functional-363000 dockerd[5716]: time="2024-08-14T16:19:44.652272944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 14 16:19:44 functional-363000 dockerd[5716]: time="2024-08-14T16:19:44.652321698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 14 16:19:44 functional-363000 dockerd[5716]: time="2024-08-14T16:19:44.652327657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 14 16:19:44 functional-363000 dockerd[5716]: time="2024-08-14T16:19:44.652354409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 14 16:19:44 functional-363000 dockerd[5716]: time="2024-08-14T16:19:44.654172632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 14 16:19:44 functional-363000 dockerd[5716]: time="2024-08-14T16:19:44.654205259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 14 16:19:44 functional-363000 dockerd[5716]: time="2024-08-14T16:19:44.654211843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 14 16:19:44 functional-363000 dockerd[5716]: time="2024-08-14T16:19:44.654252138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 14 16:19:44 functional-363000 cri-dockerd[5968]: time="2024-08-14T16:19:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/28af71635630c4cbe2e9e45e1a65b8f5c5fc836aaa14da87f25e26d1bb69acaa/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 14 16:19:44 functional-363000 cri-dockerd[5968]: time="2024-08-14T16:19:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f402e6453e7d2a8b92d3ddf89be33d4fffee31e61dbfaff1d969bf021325f6b1/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 14 16:19:44 functional-363000 dockerd[5709]: time="2024-08-14T16:19:44.960299894Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a2734e2fe64cc       72565bf5bbedf                                                                                         9 seconds ago        Exited              echoserver-arm            2                   3a3dc8fa222c4       hello-node-64b4f8f9ff-lbmsm
	758ff334ba178       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   12 seconds ago       Exited              mount-munger              0                   4ab77e45ace1c       busybox-mount
	88f4d2adcb2d0       72565bf5bbedf                                                                                         16 seconds ago       Exited              echoserver-arm            2                   74548afed5d81       hello-node-connect-65d86f57f4-j2lg9
	2a720b866ba88       nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40                         29 seconds ago       Running             myfrontend                0                   f490f733510c6       sp-pod
	bab940a768d81       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         46 seconds ago       Running             nginx                     0                   86f8f66b98856       nginx-svc
	dba18618b0c91       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   172852dea7762       storage-provisioner
	8014e19b6363f       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   421891f1dcd05       coredns-6f6b679f8f-b9d8f
	fc679c3609f64       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   172852dea7762       storage-provisioner
	c8fbdfad91ce7       71d55d66fd4ee                                                                                         About a minute ago   Running             kube-proxy                2                   023e229bae939       kube-proxy-cc6sb
	dc945b90a5470       fbbbd428abb4d                                                                                         About a minute ago   Running             kube-scheduler            2                   77ac6c35280be       kube-scheduler-functional-363000
	983d8166bb735       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   588b45cb30486       etcd-functional-363000
	f64a473dc3123       fcb0683e6bdbd                                                                                         About a minute ago   Running             kube-controller-manager   2                   8dc2dcbf75771       kube-controller-manager-functional-363000
	a1bcebde6ec7a       cd0f0ae0ec9e0                                                                                         About a minute ago   Running             kube-apiserver            0                   2d3151946e890       kube-apiserver-functional-363000
	7605eb09d77b4       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   a1796f80d133b       coredns-6f6b679f8f-b9d8f
	7f9e0a243a0eb       71d55d66fd4ee                                                                                         2 minutes ago        Exited              kube-proxy                1                   444d9eb8979a5       kube-proxy-cc6sb
	2e1b43a09f884       fcb0683e6bdbd                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   bed8a4086dd30       kube-controller-manager-functional-363000
	52bc8cd2f6969       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   7d789de70d814       etcd-functional-363000
	3101292bde727       fbbbd428abb4d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   04d97c94a3439       kube-scheduler-functional-363000
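	
	A snapshot like the table above can be reproduced against the same guest with CRI tooling (a sketch; crictl ships in the minikube guest image):
	
	  minikube -p functional-363000 ssh -- "sudo crictl ps -a"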
	
	
	==> coredns [7605eb09d77b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60073 - 8148 "HINFO IN 2086623852234398551.7630047033784918749. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009956885s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8014e19b6363] <==
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36386 - 56405 "HINFO IN 8054011498997772599.2515026190694622690. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010743567s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1490872719]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 16:18:15.273) (total time: 30001ms):
	Trace[1490872719]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (16:18:45.274)
	Trace[1490872719]: [30.001861917s] [30.001861917s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[328893237]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 16:18:15.273) (total time: 30001ms):
	Trace[328893237]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (16:18:45.274)
	Trace[328893237]: [30.001778691s] [30.001778691s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[77201605]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 16:18:15.273) (total time: 30001ms):
	Trace[77201605]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (16:18:45.275)
	Trace[77201605]: [30.00188562s] [30.00188562s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.1:20466 - 44159 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000156138s
	[INFO] 10.244.0.1:60370 - 2329 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000064839s
	[INFO] 10.244.0.1:45007 - 50163 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000899658s
	[INFO] 10.244.0.1:35378 - 5186 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000024752s
	[INFO] 10.244.0.1:29054 - 3080 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000064839s
	[INFO] 10.244.0.1:20993 - 44670 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000021544s
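	
	The NOERROR answers above show cluster DNS resolving the nginx-svc service. A hedged in-cluster check that would generate similar query lines (the pod name "dnsutils" and the busybox image are illustrative assumptions):
	
	  kubectl run dnsutils --image=busybox:1.36 --restart=Never -- sleep 3600
	  kubectl exec dnsutils -- nslookup nginx-svc.default.svc.cluster.local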
	
	
	==> describe nodes <==
	Name:               functional-363000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-363000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=functional-363000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T09_17_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:16:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-363000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:19:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:19:45 +0000   Wed, 14 Aug 2024 16:16:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:19:45 +0000   Wed, 14 Aug 2024 16:16:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:19:45 +0000   Wed, 14 Aug 2024 16:16:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:19:45 +0000   Wed, 14 Aug 2024 16:17:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-363000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 123fdf058c2449beb9a27cee9057b68e
	  System UUID:                123fdf058c2449beb9a27cee9057b68e
	  Boot ID:                    9464fcf8-5201-4c0b-8ba8-63825bf66f8c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-lbmsm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     hello-node-connect-65d86f57f4-j2lg9          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 coredns-6f6b679f8f-b9d8f                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m43s
	  kube-system                 etcd-functional-363000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m50s
	  kube-system                 kube-apiserver-functional-363000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-functional-363000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-proxy-cc6sb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-scheduler-functional-363000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-kfsdt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-5x52z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m43s                  kube-proxy       
	  Normal  Starting                 94s                    kube-proxy       
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x8 over 2m53s)  kubelet          Node functional-363000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x8 over 2m53s)  kubelet          Node functional-363000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x7 over 2m53s)  kubelet          Node functional-363000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m49s                  kubelet          Node functional-363000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m49s                  kubelet          Node functional-363000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m49s                  kubelet          Node functional-363000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m45s                  kubelet          Node functional-363000 status is now: NodeReady
	  Normal  RegisteredNode           2m45s                  node-controller  Node functional-363000 event: Registered Node functional-363000 in Controller
	  Normal  Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m21s (x8 over 2m21s)  kubelet          Node functional-363000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x8 over 2m21s)  kubelet          Node functional-363000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m21s)  kubelet          Node functional-363000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m14s                  node-controller  Node functional-363000 event: Registered Node functional-363000 in Controller
	  Normal  NodeHasNoDiskPressure    98s (x8 over 98s)      kubelet          Node functional-363000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  98s (x8 over 98s)      kubelet          Node functional-363000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 98s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     98s (x7 over 98s)      kubelet          Node functional-363000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           92s                    node-controller  Node functional-363000 event: Registered Node functional-363000 in Controller
	
	
	==> dmesg <==
	[  +0.059475] kauditd_printk_skb: 35 callbacks suppressed
	[ +10.026532] systemd-fstab-generator[5222]: Ignoring "noauto" option for root device
	[  +0.055006] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.111780] systemd-fstab-generator[5256]: Ignoring "noauto" option for root device
	[  +0.112627] systemd-fstab-generator[5282]: Ignoring "noauto" option for root device
	[  +0.107524] systemd-fstab-generator[5296]: Ignoring "noauto" option for root device
	[Aug14 16:18] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.330732] systemd-fstab-generator[5917]: Ignoring "noauto" option for root device
	[  +0.090999] systemd-fstab-generator[5929]: Ignoring "noauto" option for root device
	[  +0.075716] systemd-fstab-generator[5941]: Ignoring "noauto" option for root device
	[  +0.094985] systemd-fstab-generator[5956]: Ignoring "noauto" option for root device
	[  +0.208163] systemd-fstab-generator[6125]: Ignoring "noauto" option for root device
	[  +1.137537] systemd-fstab-generator[6247]: Ignoring "noauto" option for root device
	[  +3.419964] kauditd_printk_skb: 199 callbacks suppressed
	[ +15.766516] kauditd_printk_skb: 34 callbacks suppressed
	[ +19.202257] systemd-fstab-generator[7393]: Ignoring "noauto" option for root device
	[  +5.082931] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.079226] kauditd_printk_skb: 19 callbacks suppressed
	[Aug14 16:19] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.356935] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.030729] kauditd_printk_skb: 17 callbacks suppressed
	[ +10.310125] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.871574] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.209383] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.357815] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [52bc8cd2f696] <==
	{"level":"info","ts":"2024-08-14T16:17:31.446006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-14T16:17:31.446074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-14T16:17:31.446112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-14T16:17:31.446129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-14T16:17:31.446157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-14T16:17:31.446198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-14T16:17:31.449050Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T16:17:31.449075Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-363000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T16:17:31.449557Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T16:17:31.450027Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T16:17:31.450062Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T16:17:31.451603Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T16:17:31.451602Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T16:17:31.453779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T16:17:31.456015Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-14T16:17:57.631737Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-14T16:17:57.631791Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-363000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-14T16:17:57.631832Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T16:17:57.631843Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T16:17:57.631868Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T16:17:57.631906Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-14T16:17:57.638499Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-14T16:17:57.639963Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-14T16:17:57.639999Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-14T16:17:57.640004Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-363000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [983d8166bb73] <==
	{"level":"info","ts":"2024-08-14T16:18:12.425874Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-14T16:18:12.425914Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T16:18:12.425954Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T16:18:12.427048Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T16:18:12.428680Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T16:18:12.428862Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T16:18:12.428976Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-14T16:18:12.429598Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-14T16:18:12.429575Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T16:18:13.521108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-14T16:18:13.521156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-14T16:18:13.521184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-14T16:18:13.521203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-14T16:18:13.521227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-14T16:18:13.521242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-14T16:18:13.521251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-14T16:18:13.522198Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-363000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T16:18:13.522216Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T16:18:13.522380Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T16:18:13.522777Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T16:18:13.523385Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T16:18:13.523703Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T16:18:13.524119Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-14T16:18:13.533820Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T16:18:13.533844Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 16:19:49 up 3 min,  0 users,  load average: 0.61, 0.27, 0.11
	Linux functional-363000 5.10.207 #1 SMP PREEMPT Tue Aug 13 18:43:14 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a1bcebde6ec7] <==
	I0814 16:18:14.089732       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 16:18:14.093472       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0814 16:18:14.106208       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0814 16:18:14.106305       1 aggregator.go:171] initial CRD sync complete...
	I0814 16:18:14.106342       1 autoregister_controller.go:144] Starting autoregister controller
	I0814 16:18:14.106359       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0814 16:18:14.106375       1 cache.go:39] Caches are synced for autoregister controller
	I0814 16:18:14.127586       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 16:18:14.992748       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0814 16:18:15.192805       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0814 16:18:15.193359       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 16:18:15.198366       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0814 16:18:15.296390       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 16:18:15.301639       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 16:18:15.312777       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 16:18:15.320203       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 16:18:15.322347       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 16:18:55.076716       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.75.128"}
	I0814 16:19:00.154531       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.107.17"}
	I0814 16:19:11.517205       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0814 16:19:11.561688       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.123.222"}
	I0814 16:19:26.907005       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.20.203"}
	I0814 16:19:44.250090       1 controller.go:615] quota admission added evaluator for: namespaces
	I0814 16:19:44.342235       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.158.148"}
	I0814 16:19:44.350050       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.20.207"}
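	
	Each "allocated clusterIPs" line above corresponds to a Service object being created. A hedged replay of the hello-node-connect allocation, assuming the deployment named in the log already exists (port 8080 is an assumption based on the echoserver tests):
	
	  kubectl expose deployment hello-node-connect --type=NodePort --port=8080
	  kubectl get svc hello-node-connect -o jsonpath='{.spec.clusterIP}'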
	
	
	==> kube-controller-manager [2e1b43a09f88] <==
	I0814 16:17:35.314414       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0814 16:17:35.315569       1 shared_informer.go:320] Caches are synced for PVC protection
	I0814 16:17:35.315581       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0814 16:17:35.316672       1 shared_informer.go:320] Caches are synced for GC
	I0814 16:17:35.316737       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0814 16:17:35.317770       1 shared_informer.go:320] Caches are synced for attach detach
	I0814 16:17:35.318855       1 shared_informer.go:320] Caches are synced for ephemeral
	I0814 16:17:35.339358       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0814 16:17:35.340552       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0814 16:17:35.340575       1 shared_informer.go:320] Caches are synced for PV protection
	I0814 16:17:35.340645       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0814 16:17:35.341239       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0814 16:17:35.341412       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0814 16:17:35.342414       1 shared_informer.go:320] Caches are synced for HPA
	I0814 16:17:35.414326       1 shared_informer.go:320] Caches are synced for cronjob
	I0814 16:17:35.489671       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0814 16:17:35.553609       1 shared_informer.go:320] Caches are synced for resource quota
	I0814 16:17:35.591642       1 shared_informer.go:320] Caches are synced for resource quota
	I0814 16:17:35.747315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="430.547358ms"
	I0814 16:17:35.747372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="25.425µs"
	I0814 16:17:35.968188       1 shared_informer.go:320] Caches are synced for garbage collector
	I0814 16:17:36.039430       1 shared_informer.go:320] Caches are synced for garbage collector
	I0814 16:17:36.039445       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0814 16:17:36.288427       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="13.614593ms"
	I0814 16:17:36.290896       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="36.761µs"
	
	
	==> kube-controller-manager [f64a473dc312] <==
	I0814 16:19:33.888025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="28.669µs"
	I0814 16:19:40.754567       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="105.133µs"
	I0814 16:19:41.015662       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="37.461µs"
	I0814 16:19:44.278741       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="8.995815ms"
	E0814 16:19:44.278863       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0814 16:19:44.284206       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.262208ms"
	E0814 16:19:44.284526       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0814 16:19:44.290239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.074536ms"
	E0814 16:19:44.290262       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0814 16:19:44.291640       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.963922ms"
	E0814 16:19:44.291657       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0814 16:19:44.299480       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.18296ms"
	E0814 16:19:44.299502       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0814 16:19:44.305002       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.536647ms"
	E0814 16:19:44.305105       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0814 16:19:44.317935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.062798ms"
	I0814 16:19:44.323719       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.746907ms"
	I0814 16:19:44.326035       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.293842ms"
	I0814 16:19:44.326066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.709µs"
	I0814 16:19:44.330769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="115.509µs"
	I0814 16:19:44.339312       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.376µs"
	I0814 16:19:44.339672       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.612699ms"
	I0814 16:19:44.339755       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="30.335µs"
	I0814 16:19:44.719889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.794µs"
	I0814 16:19:45.508208       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-363000"
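
The "Unhandled Error ... serviceaccount \"kubernetes-dashboard\" not found" entries above are a startup ordering race rather than a persistent failure: the dashboard ReplicaSets are synced before their ServiceAccount exists, the controller retries, and by 16:19:44.317 the same syncs complete without error. A small illustrative poll (shelling out to kubectl; this is not what the controller itself does) makes the race visible:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Wait for the ServiceAccount that pod creation is forbidden without.
    	for i := 0; i < 10; i++ {
    		if exec.Command("kubectl", "-n", "kubernetes-dashboard",
    			"get", "serviceaccount", "kubernetes-dashboard").Run() == nil {
    			fmt.Println("serviceaccount exists; ReplicaSet sync can now create pods")
    			return
    		}
    		time.Sleep(1 * time.Second)
    	}
    	fmt.Println("serviceaccount still missing after 10s")
    }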
	
	
	==> kube-proxy [7f9e0a243a0e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 16:17:33.165019       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 16:17:33.168233       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0814 16:17:33.168294       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:17:33.176232       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 16:17:33.176245       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 16:17:33.176305       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:17:33.178030       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:17:33.178169       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:17:33.178179       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:17:33.178744       1 config.go:197] "Starting service config controller"
	I0814 16:17:33.178756       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:17:33.178771       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:17:33.178796       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:17:33.179062       1 config.go:326] "Starting node config controller"
	I0814 16:17:33.179095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:17:33.278961       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 16:17:33.278969       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:17:33.279162       1 shared_informer.go:320] Caches are synced for node config
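
The truncated "Error cleaning up nftables rules" blocks that open this section are harmless on this guest kernel: kube-proxy pipes "add table ip kube-proxy" (and the ip6 variant) into nft, the kernel answers "Operation not supported", and kube-proxy falls back to the iptables Proxier, as the following lines show. A minimal Go sketch of that style of probe (illustrative only, not kube-proxy's actual code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same shape as the failing call in the log: feed one rule to nft on stdin.
    	cmd := exec.Command("nft", "-f", "/dev/stdin")
    	cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
    	var stderr bytes.Buffer
    	cmd.Stderr = &stderr
    	if err := cmd.Run(); err != nil {
    		// On this kernel: "Error: Could not process rule: Operation not supported".
    		fmt.Printf("nftables unavailable (%v): %s", err, stderr.String())
    		return
    	}
    	fmt.Println("nftables available")
    }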
	
	
	==> kube-proxy [c8fbdfad91ce] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 16:18:15.281633       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 16:18:15.285694       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0814 16:18:15.285723       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:18:15.297303       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 16:18:15.297370       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 16:18:15.297401       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:18:15.298270       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:18:15.298418       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:18:15.298676       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:18:15.299191       1 config.go:197] "Starting service config controller"
	I0814 16:18:15.299225       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:18:15.299253       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:18:15.299296       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:18:15.299534       1 config.go:326] "Starting node config controller"
	I0814 16:18:15.299556       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:18:15.400128       1 shared_informer.go:320] Caches are synced for node config
	I0814 16:18:15.400143       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:18:15.400157       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3101292bde72] <==
	I0814 16:17:29.909352       1 serving.go:386] Generated self-signed cert in-memory
	W0814 16:17:31.985514       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 16:17:31.985623       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 16:17:31.985685       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 16:17:31.985756       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 16:17:32.013976       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 16:17:32.013993       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:17:32.016012       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 16:17:32.016097       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 16:17:32.016133       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 16:17:32.016150       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 16:17:32.116887       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 16:17:57.637039       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dc945b90a547] <==
	I0814 16:18:13.080242       1 serving.go:386] Generated self-signed cert in-memory
	W0814 16:18:14.003604       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 16:18:14.003664       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 16:18:14.003685       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 16:18:14.003715       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 16:18:14.044180       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 16:18:14.044250       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:18:14.045448       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 16:18:14.045479       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 16:18:14.045515       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 16:18:14.045542       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 16:18:14.146203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 16:19:33 functional-363000 kubelet[6254]: I0814 16:19:33.714948    6254 scope.go:117] "RemoveContainer" containerID="87153cf0a3403a8911ea7b432c2ad1bdeeb30c91556d8f4ba0691dd448755d7c"
	Aug 14 16:19:33 functional-363000 kubelet[6254]: I0814 16:19:33.882193    6254 scope.go:117] "RemoveContainer" containerID="87153cf0a3403a8911ea7b432c2ad1bdeeb30c91556d8f4ba0691dd448755d7c"
	Aug 14 16:19:33 functional-363000 kubelet[6254]: I0814 16:19:33.882355    6254 scope.go:117] "RemoveContainer" containerID="88f4d2adcb2d0f9cc863e82c747138328c37c123de0b9d44f4364d3757b21e7f"
	Aug 14 16:19:33 functional-363000 kubelet[6254]: E0814 16:19:33.882435    6254 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-j2lg9_default(da46fb2f-6f02-4df1-9ea0-4c846b730267)\"" pod="default/hello-node-connect-65d86f57f4-j2lg9" podUID="da46fb2f-6f02-4df1-9ea0-4c846b730267"
	Aug 14 16:19:36 functional-363000 kubelet[6254]: I0814 16:19:36.224283    6254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/9a352448-e812-4073-a7d1-e480f6cb86d3-test-volume\") pod \"busybox-mount\" (UID: \"9a352448-e812-4073-a7d1-e480f6cb86d3\") " pod="default/busybox-mount"
	Aug 14 16:19:36 functional-363000 kubelet[6254]: I0814 16:19:36.224335    6254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k278n\" (UniqueName: \"kubernetes.io/projected/9a352448-e812-4073-a7d1-e480f6cb86d3-kube-api-access-k278n\") pod \"busybox-mount\" (UID: \"9a352448-e812-4073-a7d1-e480f6cb86d3\") " pod="default/busybox-mount"
	Aug 14 16:19:40 functional-363000 kubelet[6254]: I0814 16:19:40.159854    6254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/9a352448-e812-4073-a7d1-e480f6cb86d3-test-volume\") pod \"9a352448-e812-4073-a7d1-e480f6cb86d3\" (UID: \"9a352448-e812-4073-a7d1-e480f6cb86d3\") "
	Aug 14 16:19:40 functional-363000 kubelet[6254]: I0814 16:19:40.159892    6254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a352448-e812-4073-a7d1-e480f6cb86d3-test-volume" (OuterVolumeSpecName: "test-volume") pod "9a352448-e812-4073-a7d1-e480f6cb86d3" (UID: "9a352448-e812-4073-a7d1-e480f6cb86d3"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 14 16:19:40 functional-363000 kubelet[6254]: I0814 16:19:40.159898    6254 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k278n\" (UniqueName: \"kubernetes.io/projected/9a352448-e812-4073-a7d1-e480f6cb86d3-kube-api-access-k278n\") pod \"9a352448-e812-4073-a7d1-e480f6cb86d3\" (UID: \"9a352448-e812-4073-a7d1-e480f6cb86d3\") "
	Aug 14 16:19:40 functional-363000 kubelet[6254]: I0814 16:19:40.159922    6254 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/9a352448-e812-4073-a7d1-e480f6cb86d3-test-volume\") on node \"functional-363000\" DevicePath \"\""
	Aug 14 16:19:40 functional-363000 kubelet[6254]: I0814 16:19:40.162990    6254 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a352448-e812-4073-a7d1-e480f6cb86d3-kube-api-access-k278n" (OuterVolumeSpecName: "kube-api-access-k278n") pod "9a352448-e812-4073-a7d1-e480f6cb86d3" (UID: "9a352448-e812-4073-a7d1-e480f6cb86d3"). InnerVolumeSpecName "kube-api-access-k278n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 16:19:40 functional-363000 kubelet[6254]: I0814 16:19:40.261087    6254 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k278n\" (UniqueName: \"kubernetes.io/projected/9a352448-e812-4073-a7d1-e480f6cb86d3-kube-api-access-k278n\") on node \"functional-363000\" DevicePath \"\""
	Aug 14 16:19:40 functional-363000 kubelet[6254]: I0814 16:19:40.716437    6254 scope.go:117] "RemoveContainer" containerID="2b36969de5e47f4ddadb8e09459cb9af440f9843231c90a1636ae7ac54a16364"
	Aug 14 16:19:41 functional-363000 kubelet[6254]: I0814 16:19:41.007650    6254 scope.go:117] "RemoveContainer" containerID="2b36969de5e47f4ddadb8e09459cb9af440f9843231c90a1636ae7ac54a16364"
	Aug 14 16:19:41 functional-363000 kubelet[6254]: I0814 16:19:41.007939    6254 scope.go:117] "RemoveContainer" containerID="a2734e2fe64cc188275b9f3290f103da9156868e15ed04d692032ad241b8daa3"
	Aug 14 16:19:41 functional-363000 kubelet[6254]: E0814 16:19:41.008059    6254 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-lbmsm_default(37fcf8a6-1523-423e-9e21-eac41f1f779e)\"" pod="default/hello-node-64b4f8f9ff-lbmsm" podUID="37fcf8a6-1523-423e-9e21-eac41f1f779e"
	Aug 14 16:19:41 functional-363000 kubelet[6254]: I0814 16:19:41.018990    6254 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ab77e45ace1c0b308aed8e6cd7357138b76a5d5d84203802c20919fc7363590"
	Aug 14 16:19:44 functional-363000 kubelet[6254]: E0814 16:19:44.319975    6254 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a352448-e812-4073-a7d1-e480f6cb86d3" containerName="mount-munger"
	Aug 14 16:19:44 functional-363000 kubelet[6254]: I0814 16:19:44.320003    6254 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a352448-e812-4073-a7d1-e480f6cb86d3" containerName="mount-munger"
	Aug 14 16:19:44 functional-363000 kubelet[6254]: I0814 16:19:44.487645    6254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/68e6e66b-1652-47bd-a078-08c37d3bbb38-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-kfsdt\" (UID: \"68e6e66b-1652-47bd-a078-08c37d3bbb38\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-kfsdt"
	Aug 14 16:19:44 functional-363000 kubelet[6254]: I0814 16:19:44.487697    6254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbnc\" (UniqueName: \"kubernetes.io/projected/68e6e66b-1652-47bd-a078-08c37d3bbb38-kube-api-access-bnbnc\") pod \"dashboard-metrics-scraper-c5db448b4-kfsdt\" (UID: \"68e6e66b-1652-47bd-a078-08c37d3bbb38\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-kfsdt"
	Aug 14 16:19:44 functional-363000 kubelet[6254]: I0814 16:19:44.487717    6254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7df3a152-a728-424b-9848-239214f59778-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-5x52z\" (UID: \"7df3a152-a728-424b-9848-239214f59778\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5x52z"
	Aug 14 16:19:44 functional-363000 kubelet[6254]: I0814 16:19:44.487747    6254 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t99s8\" (UniqueName: \"kubernetes.io/projected/7df3a152-a728-424b-9848-239214f59778-kube-api-access-t99s8\") pod \"kubernetes-dashboard-695b96c756-5x52z\" (UID: \"7df3a152-a728-424b-9848-239214f59778\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5x52z"
	Aug 14 16:19:44 functional-363000 kubelet[6254]: I0814 16:19:44.714414    6254 scope.go:117] "RemoveContainer" containerID="88f4d2adcb2d0f9cc863e82c747138328c37c123de0b9d44f4364d3757b21e7f"
	Aug 14 16:19:44 functional-363000 kubelet[6254]: E0814 16:19:44.714484    6254 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-j2lg9_default(da46fb2f-6f02-4df1-9ea0-4c846b730267)\"" pod="default/hello-node-connect-65d86f57f4-j2lg9" podUID="da46fb2f-6f02-4df1-9ea0-4c846b730267"
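
The repeated CrashLoopBackOff messages for echoserver-arm follow the kubelet's standard restart back-off: the delay starts at 10s, doubles after each failed restart (hence "back-off 20s" here), and is capped at 5m. Those constants are assumed kubelet defaults, not values read from this cluster:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 10 * time.Second        // assumed kubelet initial back-off
    	const maxDelay = 5 * time.Minute // assumed kubelet cap
    	for restart := 1; restart <= 6; restart++ {
    		fmt.Printf("after failed restart %d: back-off %v\n", restart, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }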
	
	
	==> storage-provisioner [dba18618b0c9] <==
	I0814 16:18:30.805175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 16:18:30.809419       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 16:18:30.809572       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 16:18:48.220648       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 16:18:48.220842       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-363000_24c9c900-559d-4847-9438-86a1089ab0b9!
	I0814 16:18:48.222512       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f3cbaa75-dc92-48c5-a7dd-442f586a7efb", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-363000_24c9c900-559d-4847-9438-86a1089ab0b9 became leader
	I0814 16:18:48.321631       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-363000_24c9c900-559d-4847-9438-86a1089ab0b9!
	I0814 16:19:06.374633       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0814 16:19:06.374684       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    dce35035-c9f4-4161-93eb-b68eafa4a7b5 362 0 2024-08-14 16:17:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-14 16:17:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-9442b8fa-a763-4b28-9841-39d1b7ae7a21 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  9442b8fa-a763-4b28-9841-39d1b7ae7a21 709 0 2024-08-14 16:19:06 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-14 16:19:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-14 16:19:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0814 16:19:06.375129       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-9442b8fa-a763-4b28-9841-39d1b7ae7a21" provisioned
	I0814 16:19:06.375171       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0814 16:19:06.375194       1 volume_store.go:212] Trying to save persistentvolume "pvc-9442b8fa-a763-4b28-9841-39d1b7ae7a21"
	I0814 16:19:06.375898       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9442b8fa-a763-4b28-9841-39d1b7ae7a21", APIVersion:"v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0814 16:19:06.381559       1 volume_store.go:219] persistentvolume "pvc-9442b8fa-a763-4b28-9841-39d1b7ae7a21" saved
	I0814 16:19:06.381598       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9442b8fa-a763-4b28-9841-39d1b7ae7a21", APIVersion:"v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-9442b8fa-a763-4b28-9841-39d1b7ae7a21
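
The provisioning trace above reduces to creating a directory named after the claim under the hostpath root; the "Provisioning volume ... to /tmp/hostpath-provisioner/default/myclaim" line prints the target path. A stripped-down sketch of that step (layout inferred from that path, not the provisioner's actual source):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // provision creates the backing directory for a claim, mirroring the
    // /tmp/hostpath-provisioner/default/myclaim path in the log above.
    func provision(base, namespace, claim string) (string, error) {
    	path := filepath.Join(base, namespace, claim)
    	if err := os.MkdirAll(path, 0o777); err != nil {
    		return "", fmt.Errorf("provision %s/%s: %w", namespace, claim, err)
    	}
    	return path, nil
    }

    func main() {
    	path, err := provision("/tmp/hostpath-provisioner", "default", "myclaim")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("provisioned", path)
    }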
	
	
	==> storage-provisioner [fc679c3609f6] <==
	I0814 16:18:15.201091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0814 16:18:15.201877       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
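
This earlier storage-provisioner container exited fatally because it started while the apiserver was still coming back: the in-cluster VIP 10.96.0.1:443 refused connections. A tiny probe of the same endpoint (URL taken from the log; TLS verification is skipped only because this throwaway client has no cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 32 * time.Second,
    		// Illustrative client without the cluster CA, so skip verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
    	if err != nil {
    		fmt.Println("error getting server version:", err) // "connection refused" while the apiserver is down
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("apiserver answered:", resp.Status)
    }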
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-363000 -n functional-363000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-363000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-kfsdt kubernetes-dashboard-695b96c756-5x52z
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-363000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-kfsdt kubernetes-dashboard-695b96c756-5x52z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-363000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-kfsdt kubernetes-dashboard-695b96c756-5x52z: exit status 1 (40.559ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-363000/192.168.105.4
	Start Time:       Wed, 14 Aug 2024 09:19:36 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://758ff334ba1789a735c7bd1c9d11b0abba1ed901703a80c79e93c851e0d0e084
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 14 Aug 2024 09:19:37 -0700
	      Finished:     Wed, 14 Aug 2024 09:19:38 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k278n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-k278n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-363000
	  Normal  Pulling    14s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     13s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.22s (1.22s including waiting). Image size: 3547125 bytes.
	  Normal  Created    13s   kubelet            Created container mount-munger
	  Normal  Started    13s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-kfsdt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-5x52z" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-363000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-kfsdt kubernetes-dashboard-695b96c756-5x52z: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (38.60s)

TestMultiControlPlane/serial/StopSecondaryNode (214.14s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 node stop m02 -v=7 --alsologtostderr
E0814 09:24:10.254856    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:24:20.497966    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-243000 node stop m02 -v=7 --alsologtostderr: (12.196259917s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr
E0814 09:24:40.980923    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:25:21.943007    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:26:43.863715    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:27:01.196127    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr: exit status 7 (2m55.98145425s)

-- stdout --
	ha-243000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-243000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-243000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0814 09:24:20.685870    2728 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:24:20.686061    2728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:24:20.686065    2728 out.go:304] Setting ErrFile to fd 2...
	I0814 09:24:20.686068    2728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:24:20.686218    2728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:24:20.686359    2728 out.go:298] Setting JSON to false
	I0814 09:24:20.686373    2728 mustload.go:65] Loading cluster: ha-243000
	I0814 09:24:20.686414    2728 notify.go:220] Checking for updates...
	I0814 09:24:20.686672    2728 config.go:182] Loaded profile config "ha-243000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:24:20.686680    2728 status.go:255] checking status of ha-243000 ...
	I0814 09:24:20.687566    2728 status.go:330] ha-243000 host status = "Running" (err=<nil>)
	I0814 09:24:20.687576    2728 host.go:66] Checking if "ha-243000" exists ...
	I0814 09:24:20.687691    2728 host.go:66] Checking if "ha-243000" exists ...
	I0814 09:24:20.687821    2728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:24:20.687832    2728 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/id_rsa Username:docker}
	W0814 09:24:46.609811    2728 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0814 09:24:46.609902    2728 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0814 09:24:46.609915    2728 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0814 09:24:46.609920    2728 status.go:257] ha-243000 status: &{Name:ha-243000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0814 09:24:46.609933    2728 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0814 09:24:46.609937    2728 status.go:255] checking status of ha-243000-m02 ...
	I0814 09:24:46.610152    2728 status.go:330] ha-243000-m02 host status = "Stopped" (err=<nil>)
	I0814 09:24:46.610159    2728 status.go:343] host is not running, skipping remaining checks
	I0814 09:24:46.610161    2728 status.go:257] ha-243000-m02 status: &{Name:ha-243000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 09:24:46.610165    2728 status.go:255] checking status of ha-243000-m03 ...
	I0814 09:24:46.610896    2728 status.go:330] ha-243000-m03 host status = "Running" (err=<nil>)
	I0814 09:24:46.610903    2728 host.go:66] Checking if "ha-243000-m03" exists ...
	I0814 09:24:46.611001    2728 host.go:66] Checking if "ha-243000-m03" exists ...
	I0814 09:24:46.611123    2728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:24:46.611131    2728 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m03/id_rsa Username:docker}
	W0814 09:26:01.610152    2728 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0814 09:26:01.610199    2728 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0814 09:26:01.610214    2728 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0814 09:26:01.610217    2728 status.go:257] ha-243000-m03 status: &{Name:ha-243000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0814 09:26:01.610226    2728 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0814 09:26:01.610233    2728 status.go:255] checking status of ha-243000-m04 ...
	I0814 09:26:01.610886    2728 status.go:330] ha-243000-m04 host status = "Running" (err=<nil>)
	I0814 09:26:01.610894    2728 host.go:66] Checking if "ha-243000-m04" exists ...
	I0814 09:26:01.611011    2728 host.go:66] Checking if "ha-243000-m04" exists ...
	I0814 09:26:01.611129    2728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:26:01.611135    2728 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m04/id_rsa Username:docker}
	W0814 09:27:16.610673    2728 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0814 09:27:16.610719    2728 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0814 09:27:16.610728    2728 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0814 09:27:16.610734    2728 status.go:257] ha-243000-m04 status: &{Name:ha-243000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0814 09:27:16.610743    2728 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr": ha-243000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-243000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-243000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr": ha-243000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-243000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-243000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr": ha-243000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-243000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-243000-m04
type: Worker
host: Error
kubelet: Nonexistent
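
The three assertions above (ha_test.go:378, :381, :384) appear to boil down to counting marker strings in the status output; with every host unreachable over SSH, all the counts are zero. A rough sketch of that style of check, using a stand-in for the stdout quoted above:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Stand-in for the `minikube status` stdout quoted above.
    	status := "ha-243000\nhost: Error\nkubelet: Nonexistent\n\nha-243000-m02\nhost: Stopped\nkubelet: Stopped\n"
    	fmt.Println("running hosts:", strings.Count(status, "host: Running"))           // the test wants 3
    	fmt.Println("running kubelets:", strings.Count(status, "kubelet: Running"))     // the test wants 3
    	fmt.Println("running apiservers:", strings.Count(status, "apiserver: Running")) // the test wants 2
    }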

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000: exit status 3 (25.958530709s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0814 09:27:42.568745    2758 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0814 09:27:42.568753    2758 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-243000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.14s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m16.886463334s)
ha_test.go:413: expected profile "ha-243000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-243000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-243000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-243000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
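
The check at ha_test.go:413 reads the profile status straight out of `minikube profile list --output json`. A trimmed sketch of that decode (struct fields taken from the "valid", "Name" and "Status" keys visible in the JSON above; everything else is omitted):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Only the fields the check needs.
    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Status string `json:"Status"`
    	} `json:"valid"`
    }

    func main() {
    	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-243000","Status":"Stopped"}]}`)
    	var pl profileList
    	if err := json.Unmarshal(raw, &pl); err != nil {
    		panic(err)
    	}
    	for _, p := range pl.Valid {
    		fmt.Printf("%s: %s (the test expected Degraded)\n", p.Name, p.Status)
    	}
    }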
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000
E0814 09:28:59.974086    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000: exit status 3 (25.960310833s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0814 09:29:25.408974    2774 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0814 09:29:25.409020    2774 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-243000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.85s)

TestMultiControlPlane/serial/RestartSecondaryNode (183.77s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 node start m02 -v=7 --alsologtostderr
E0814 09:29:27.701624    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-243000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.111111625s)

-- stdout --
	* Starting "ha-243000-m02" control-plane node in "ha-243000" cluster
	* Restarting existing qemu2 VM for "ha-243000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-243000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:29:25.475168    2781 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:29:25.475493    2781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:29:25.475498    2781 out.go:304] Setting ErrFile to fd 2...
	I0814 09:29:25.475501    2781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:29:25.475663    2781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:29:25.475986    2781 mustload.go:65] Loading cluster: ha-243000
	I0814 09:29:25.476278    2781 config.go:182] Loaded profile config "ha-243000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0814 09:29:25.476572    2781 host.go:58] "ha-243000-m02" host status: Stopped
	I0814 09:29:25.480057    2781 out.go:177] * Starting "ha-243000-m02" control-plane node in "ha-243000" cluster
	I0814 09:29:25.483951    2781 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:29:25.483968    2781 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:29:25.483975    2781 cache.go:56] Caching tarball of preloaded images
	I0814 09:29:25.484053    2781 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:29:25.484060    2781 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:29:25.484122    2781 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/ha-243000/config.json ...
	I0814 09:29:25.484705    2781 start.go:360] acquireMachinesLock for ha-243000-m02: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:29:25.484746    2781 start.go:364] duration metric: took 28.583µs to acquireMachinesLock for "ha-243000-m02"
	I0814 09:29:25.484754    2781 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:29:25.484759    2781 fix.go:54] fixHost starting: m02
	I0814 09:29:25.484902    2781 fix.go:112] recreateIfNeeded on ha-243000-m02: state=Stopped err=<nil>
	W0814 09:29:25.484911    2781 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:29:25.488999    2781 out.go:177] * Restarting existing qemu2 VM for "ha-243000-m02" ...
	I0814 09:29:25.492965    2781 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:29:25.493007    2781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:1b:9b:60:77:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/disk.qcow2
	I0814 09:29:25.495446    2781 main.go:141] libmachine: STDOUT: 
	I0814 09:29:25.495473    2781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:29:25.495499    2781 fix.go:56] duration metric: took 10.740792ms for fixHost
	I0814 09:29:25.495502    2781 start.go:83] releasing machines lock for "ha-243000-m02", held for 10.752292ms
	W0814 09:29:25.495509    2781 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:29:25.495543    2781 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:29:25.495547    2781 start.go:729] Will try again in 5 seconds ...
	I0814 09:29:30.495737    2781 start.go:360] acquireMachinesLock for ha-243000-m02: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:29:30.495865    2781 start.go:364] duration metric: took 107.083µs to acquireMachinesLock for "ha-243000-m02"
	I0814 09:29:30.495898    2781 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:29:30.495902    2781 fix.go:54] fixHost starting: m02
	I0814 09:29:30.496049    2781 fix.go:112] recreateIfNeeded on ha-243000-m02: state=Stopped err=<nil>
	W0814 09:29:30.496056    2781 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:29:30.500247    2781 out.go:177] * Restarting existing qemu2 VM for "ha-243000-m02" ...
	I0814 09:29:30.504293    2781 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:29:30.504335    2781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:1b:9b:60:77:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/disk.qcow2
	I0814 09:29:30.506626    2781 main.go:141] libmachine: STDOUT: 
	I0814 09:29:30.506656    2781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:29:30.506676    2781 fix.go:56] duration metric: took 10.775334ms for fixHost
	I0814 09:29:30.506682    2781 start.go:83] releasing machines lock for "ha-243000-m02", held for 10.811125ms
	W0814 09:29:30.506726    2781 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-243000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-243000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:29:30.510146    2781 out.go:177] 
	W0814 09:29:30.514275    2781 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:29:30.514280    2781 out.go:239] * 
	* 
	W0814 09:29:30.516000    2781 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:29:30.520256    2781 out.go:177] 

** /stderr **
ha_test.go:422: I0814 09:29:25.475168    2781 out.go:291] Setting OutFile to fd 1 ...
I0814 09:29:25.475493    2781 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:29:25.475498    2781 out.go:304] Setting ErrFile to fd 2...
I0814 09:29:25.475501    2781 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:29:25.475663    2781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
I0814 09:29:25.475986    2781 mustload.go:65] Loading cluster: ha-243000
I0814 09:29:25.476278    2781 config.go:182] Loaded profile config "ha-243000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0814 09:29:25.476572    2781 host.go:58] "ha-243000-m02" host status: Stopped
I0814 09:29:25.480057    2781 out.go:177] * Starting "ha-243000-m02" control-plane node in "ha-243000" cluster
I0814 09:29:25.483951    2781 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0814 09:29:25.483968    2781 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0814 09:29:25.483975    2781 cache.go:56] Caching tarball of preloaded images
I0814 09:29:25.484053    2781 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0814 09:29:25.484060    2781 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0814 09:29:25.484122    2781 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/ha-243000/config.json ...
I0814 09:29:25.484705    2781 start.go:360] acquireMachinesLock for ha-243000-m02: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0814 09:29:25.484746    2781 start.go:364] duration metric: took 28.583µs to acquireMachinesLock for "ha-243000-m02"
I0814 09:29:25.484754    2781 start.go:96] Skipping create...Using existing machine configuration
I0814 09:29:25.484759    2781 fix.go:54] fixHost starting: m02
I0814 09:29:25.484902    2781 fix.go:112] recreateIfNeeded on ha-243000-m02: state=Stopped err=<nil>
W0814 09:29:25.484911    2781 fix.go:138] unexpected machine state, will restart: <nil>
I0814 09:29:25.488999    2781 out.go:177] * Restarting existing qemu2 VM for "ha-243000-m02" ...
I0814 09:29:25.492965    2781 qemu.go:418] Using hvf for hardware acceleration
I0814 09:29:25.493007    2781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:1b:9b:60:77:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/disk.qcow2
I0814 09:29:25.495446    2781 main.go:141] libmachine: STDOUT: 
I0814 09:29:25.495473    2781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0814 09:29:25.495499    2781 fix.go:56] duration metric: took 10.740792ms for fixHost
I0814 09:29:25.495502    2781 start.go:83] releasing machines lock for "ha-243000-m02", held for 10.752292ms
W0814 09:29:25.495509    2781 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0814 09:29:25.495543    2781 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0814 09:29:25.495547    2781 start.go:729] Will try again in 5 seconds ...
I0814 09:29:30.495737    2781 start.go:360] acquireMachinesLock for ha-243000-m02: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0814 09:29:30.495865    2781 start.go:364] duration metric: took 107.083µs to acquireMachinesLock for "ha-243000-m02"
I0814 09:29:30.495898    2781 start.go:96] Skipping create...Using existing machine configuration
I0814 09:29:30.495902    2781 fix.go:54] fixHost starting: m02
I0814 09:29:30.496049    2781 fix.go:112] recreateIfNeeded on ha-243000-m02: state=Stopped err=<nil>
W0814 09:29:30.496056    2781 fix.go:138] unexpected machine state, will restart: <nil>
I0814 09:29:30.500247    2781 out.go:177] * Restarting existing qemu2 VM for "ha-243000-m02" ...
I0814 09:29:30.504293    2781 qemu.go:418] Using hvf for hardware acceleration
I0814 09:29:30.504335    2781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:1b:9b:60:77:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m02/disk.qcow2
I0814 09:29:30.506626    2781 main.go:141] libmachine: STDOUT: 
I0814 09:29:30.506656    2781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0814 09:29:30.506676    2781 fix.go:56] duration metric: took 10.775334ms for fixHost
I0814 09:29:30.506682    2781 start.go:83] releasing machines lock for "ha-243000-m02", held for 10.811125ms
W0814 09:29:30.506726    2781 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-243000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-243000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0814 09:29:30.510146    2781 out.go:177] 
W0814 09:29:30.514275    2781 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0814 09:29:30.514280    2781 out.go:239] * 
* 
W0814 09:29:30.516000    2781 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0814 09:29:30.520256    2781 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-243000 node start m02 -v=7 --alsologtostderr": exit status 80
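
Both restart attempts above fail at the same step: the qemu2 driver cannot reach the socket_vmnet unix socket, so each "Restarting existing qemu2 VM" line ends in "Connection refused". A quick way to confirm whether the daemon is listening, independent of minikube, is to dial the socket directly. A minimal Go sketch (the program is hypothetical; the socket path is the one from the log):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failure output above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here reproduces the driver-start failure
		// without going through minikube at all.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
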
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr
E0814 09:32:01.181091    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr: exit status 7 (2m32.695567417s)

-- stdout --
	ha-243000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-243000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-243000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0814 09:29:30.555826    2785 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:29:30.556012    2785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:29:30.556015    2785 out.go:304] Setting ErrFile to fd 2...
	I0814 09:29:30.556017    2785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:29:30.556141    2785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:29:30.556259    2785 out.go:298] Setting JSON to false
	I0814 09:29:30.556273    2785 mustload.go:65] Loading cluster: ha-243000
	I0814 09:29:30.556308    2785 notify.go:220] Checking for updates...
	I0814 09:29:30.556479    2785 config.go:182] Loaded profile config "ha-243000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:29:30.556485    2785 status.go:255] checking status of ha-243000 ...
	I0814 09:29:30.557157    2785 status.go:330] ha-243000 host status = "Running" (err=<nil>)
	I0814 09:29:30.557166    2785 host.go:66] Checking if "ha-243000" exists ...
	I0814 09:29:30.557265    2785 host.go:66] Checking if "ha-243000" exists ...
	I0814 09:29:30.557378    2785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:29:30.557386    2785 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/id_rsa Username:docker}
	W0814 09:29:30.557562    2785 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0814 09:29:30.557577    2785 retry.go:31] will retry after 152.254963ms: dial tcp 192.168.105.5:22: connect: host is down
	W0814 09:29:30.711970    2785 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0814 09:29:30.711988    2785 retry.go:31] will retry after 357.150944ms: dial tcp 192.168.105.5:22: connect: host is down
	W0814 09:29:31.071327    2785 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0814 09:29:31.071354    2785 retry.go:31] will retry after 570.42762ms: dial tcp 192.168.105.5:22: connect: host is down
	W0814 09:29:31.643938    2785 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0814 09:29:31.643997    2785 retry.go:31] will retry after 321.219123ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0814 09:29:31.967254    2785 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/id_rsa Username:docker}
	W0814 09:29:31.967584    2785 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0814 09:29:31.967598    2785 retry.go:31] will retry after 248.884106ms: dial tcp 192.168.105.5:22: connect: host is down
	W0814 09:29:32.218624    2785 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0814 09:29:32.218641    2785 retry.go:31] will retry after 270.36468ms: dial tcp 192.168.105.5:22: connect: host is down
	W0814 09:29:32.491187    2785 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0814 09:29:32.491209    2785 retry.go:31] will retry after 714.308225ms: dial tcp 192.168.105.5:22: connect: host is down
	W0814 09:29:33.207642    2785 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	W0814 09:29:33.207689    2785 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	E0814 09:29:33.207702    2785 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0814 09:29:33.207706    2785 status.go:257] ha-243000 status: &{Name:ha-243000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0814 09:29:33.207715    2785 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0814 09:29:33.207720    2785 status.go:255] checking status of ha-243000-m02 ...
	I0814 09:29:33.207895    2785 status.go:330] ha-243000-m02 host status = "Stopped" (err=<nil>)
	I0814 09:29:33.207900    2785 status.go:343] host is not running, skipping remaining checks
	I0814 09:29:33.207902    2785 status.go:257] ha-243000-m02 status: &{Name:ha-243000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 09:29:33.207907    2785 status.go:255] checking status of ha-243000-m03 ...
	I0814 09:29:33.208491    2785 status.go:330] ha-243000-m03 host status = "Running" (err=<nil>)
	I0814 09:29:33.208497    2785 host.go:66] Checking if "ha-243000-m03" exists ...
	I0814 09:29:33.208624    2785 host.go:66] Checking if "ha-243000-m03" exists ...
	I0814 09:29:33.208750    2785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:29:33.208756    2785 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m03/id_rsa Username:docker}
	W0814 09:30:48.209098    2785 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0814 09:30:48.209276    2785 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0814 09:30:48.209452    2785 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0814 09:30:48.209478    2785 status.go:257] ha-243000-m03 status: &{Name:ha-243000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0814 09:30:48.209522    2785 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0814 09:30:48.209543    2785 status.go:255] checking status of ha-243000-m04 ...
	I0814 09:30:48.212602    2785 status.go:330] ha-243000-m04 host status = "Running" (err=<nil>)
	I0814 09:30:48.212631    2785 host.go:66] Checking if "ha-243000-m04" exists ...
	I0814 09:30:48.213147    2785 host.go:66] Checking if "ha-243000-m04" exists ...
	I0814 09:30:48.213726    2785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 09:30:48.213775    2785 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000-m04/id_rsa Username:docker}
	W0814 09:32:03.209372    2785 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0814 09:32:03.209430    2785 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0814 09:32:03.209439    2785 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0814 09:32:03.209443    2785 status.go:257] ha-243000-m04 status: &{Name:ha-243000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0814 09:32:03.209454    2785 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr" : exit status 7
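
The status probe above dials each node's SSH port and retries with growing delays (152ms, 357ms, 570ms, ...) before marking the host "Error". A rough sketch of that dial-with-backoff pattern, assuming a generic retry loop rather than minikube's actual sshutil code:

package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// dialWithRetry redials addr with a growing, jittered delay until it
// connects or the overall deadline passes, loosely mirroring the
// 152ms/357ms/570ms waits in the log above.
func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
	start := time.Now()
	backoff := 150 * time.Millisecond
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		if time.Since(start) > deadline {
			return nil, fmt.Errorf("giving up on %s: %w", addr, err)
		}
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		if backoff < 2*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	// 192.168.105.5:22 is the primary node's SSH endpoint from the log.
	if conn, err := dialWithRetry("192.168.105.5:22", 30*time.Second); err != nil {
		fmt.Println(err)
	} else {
		conn.Close()
	}
}
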
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000: exit status 3 (25.963256541s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0814 09:32:29.167785    3122 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0814 09:32:29.167819    3122 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-243000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (183.77s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-243000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-243000 -v=7 --alsologtostderr
E0814 09:33:59.958107    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:37:01.170296    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-243000 -v=7 --alsologtostderr: (3m49.01720875s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-243000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-243000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.217820458s)

-- stdout --
	* [ha-243000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-243000" primary control-plane node in "ha-243000" cluster
	* Restarting existing qemu2 VM for "ha-243000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-243000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:37:36.601472    3202 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:37:36.601674    3202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:36.601678    3202 out.go:304] Setting ErrFile to fd 2...
	I0814 09:37:36.601681    3202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:36.601827    3202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:37:36.602992    3202 out.go:298] Setting JSON to false
	I0814 09:37:36.622267    3202 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2213,"bootTime":1723651243,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:37:36.622331    3202 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:37:36.627313    3202 out.go:177] * [ha-243000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:37:36.635337    3202 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:37:36.635376    3202 notify.go:220] Checking for updates...
	I0814 09:37:36.641258    3202 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:37:36.644187    3202 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:37:36.647263    3202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:37:36.650244    3202 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:37:36.651662    3202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:37:36.654621    3202 config.go:182] Loaded profile config "ha-243000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:37:36.654675    3202 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:37:36.659221    3202 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:37:36.664167    3202 start.go:297] selected driver: qemu2
	I0814 09:37:36.664173    3202 start.go:901] validating driver "qemu2" against &{Name:ha-243000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-243000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:37:36.664244    3202 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:37:36.666855    3202 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:37:36.666899    3202 cni.go:84] Creating CNI manager for ""
	I0814 09:37:36.666904    3202 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0814 09:37:36.666972    3202 start.go:340] cluster config:
	{Name:ha-243000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-243000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:37:36.671183    3202 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:37:36.680231    3202 out.go:177] * Starting "ha-243000" primary control-plane node in "ha-243000" cluster
	I0814 09:37:36.684221    3202 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:37:36.684239    3202 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:37:36.684252    3202 cache.go:56] Caching tarball of preloaded images
	I0814 09:37:36.684326    3202 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:37:36.684333    3202 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:37:36.684430    3202 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/ha-243000/config.json ...
	I0814 09:37:36.684871    3202 start.go:360] acquireMachinesLock for ha-243000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:37:36.684909    3202 start.go:364] duration metric: took 31.292µs to acquireMachinesLock for "ha-243000"
	I0814 09:37:36.684919    3202 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:37:36.684924    3202 fix.go:54] fixHost starting: 
	I0814 09:37:36.685057    3202 fix.go:112] recreateIfNeeded on ha-243000: state=Stopped err=<nil>
	W0814 09:37:36.685064    3202 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:37:36.688247    3202 out.go:177] * Restarting existing qemu2 VM for "ha-243000" ...
	I0814 09:37:36.696212    3202 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:37:36.696248    3202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b3:0c:a3:98:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/disk.qcow2
	I0814 09:37:36.698350    3202 main.go:141] libmachine: STDOUT: 
	I0814 09:37:36.698370    3202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:37:36.698405    3202 fix.go:56] duration metric: took 13.479791ms for fixHost
	I0814 09:37:36.698409    3202 start.go:83] releasing machines lock for "ha-243000", held for 13.495625ms
	W0814 09:37:36.698415    3202 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:37:36.698457    3202 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:37:36.698462    3202 start.go:729] Will try again in 5 seconds ...
	I0814 09:37:41.700425    3202 start.go:360] acquireMachinesLock for ha-243000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:37:41.700815    3202 start.go:364] duration metric: took 272.792µs to acquireMachinesLock for "ha-243000"
	I0814 09:37:41.700939    3202 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:37:41.700958    3202 fix.go:54] fixHost starting: 
	I0814 09:37:41.701579    3202 fix.go:112] recreateIfNeeded on ha-243000: state=Stopped err=<nil>
	W0814 09:37:41.701608    3202 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:37:41.708905    3202 out.go:177] * Restarting existing qemu2 VM for "ha-243000" ...
	I0814 09:37:41.712880    3202 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:37:41.713064    3202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b3:0c:a3:98:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/disk.qcow2
	I0814 09:37:41.722069    3202 main.go:141] libmachine: STDOUT: 
	I0814 09:37:41.722136    3202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:37:41.722199    3202 fix.go:56] duration metric: took 21.243833ms for fixHost
	I0814 09:37:41.722212    3202 start.go:83] releasing machines lock for "ha-243000", held for 21.373291ms
	W0814 09:37:41.722416    3202 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-243000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-243000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:37:41.729926    3202 out.go:177] 
	W0814 09:37:41.733963    3202 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:37:41.733993    3202 out.go:239] * 
	* 
	W0814 09:37:41.736386    3202 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:37:41.748857    3202 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-243000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-243000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000: exit status 7 (33.242375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-243000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)
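
As in the earlier failures, the start path makes exactly two attempts: StartHost fails, minikube waits 5 seconds ("Will try again in 5 seconds ..."), retries once, then exits with GUEST_PROVISION. A compressed Go sketch of that control flow; the startHost stub is hypothetical and hard-wired to fail the way this run does:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errVMNet = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// startHost stands in for the qemu2 driver start call; in this report it
// fails both times, so the retry path below is always taken.
func startHost(name string) error { return errVMNet }

func main() {
	const name = "ha-243000"
	if err := startHost(name); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		if err := startHost(name); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}
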

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-243000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.565042ms)

-- stdout --
	* The control-plane node ha-243000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-243000"

-- /stdout --
** stderr ** 
	I0814 09:37:41.892904    3215 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:37:41.893155    3215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:41.893159    3215 out.go:304] Setting ErrFile to fd 2...
	I0814 09:37:41.893161    3215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:41.893282    3215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:37:41.893522    3215 mustload.go:65] Loading cluster: ha-243000
	I0814 09:37:41.893754    3215 config.go:182] Loaded profile config "ha-243000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0814 09:37:41.894110    3215 out.go:239] ! The control-plane node ha-243000 host is not running (will try others): state=Stopped
	! The control-plane node ha-243000 host is not running (will try others): state=Stopped
	W0814 09:37:41.894223    3215 out.go:239] ! The control-plane node ha-243000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-243000-m02 host is not running (will try others): state=Stopped
	I0814 09:37:41.898560    3215 out.go:177] * The control-plane node ha-243000-m03 host is not running: state=Stopped
	I0814 09:37:41.901631    3215 out.go:177]   To start a cluster, run: "minikube start -p ha-243000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-243000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr: exit status 7 (30.854084ms)

-- stdout --
	ha-243000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:37:41.932658    3217 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:37:41.932801    3217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:41.932804    3217 out.go:304] Setting ErrFile to fd 2...
	I0814 09:37:41.932807    3217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:37:41.932936    3217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:37:41.933055    3217 out.go:298] Setting JSON to false
	I0814 09:37:41.933067    3217 mustload.go:65] Loading cluster: ha-243000
	I0814 09:37:41.933133    3217 notify.go:220] Checking for updates...
	I0814 09:37:41.933336    3217 config.go:182] Loaded profile config "ha-243000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:37:41.933342    3217 status.go:255] checking status of ha-243000 ...
	I0814 09:37:41.933546    3217 status.go:330] ha-243000 host status = "Stopped" (err=<nil>)
	I0814 09:37:41.933550    3217 status.go:343] host is not running, skipping remaining checks
	I0814 09:37:41.933552    3217 status.go:257] ha-243000 status: &{Name:ha-243000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 09:37:41.933561    3217 status.go:255] checking status of ha-243000-m02 ...
	I0814 09:37:41.933650    3217 status.go:330] ha-243000-m02 host status = "Stopped" (err=<nil>)
	I0814 09:37:41.933653    3217 status.go:343] host is not running, skipping remaining checks
	I0814 09:37:41.933655    3217 status.go:257] ha-243000-m02 status: &{Name:ha-243000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 09:37:41.933659    3217 status.go:255] checking status of ha-243000-m03 ...
	I0814 09:37:41.933743    3217 status.go:330] ha-243000-m03 host status = "Stopped" (err=<nil>)
	I0814 09:37:41.933745    3217 status.go:343] host is not running, skipping remaining checks
	I0814 09:37:41.933746    3217 status.go:257] ha-243000-m03 status: &{Name:ha-243000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 09:37:41.933750    3217 status.go:255] checking status of ha-243000-m04 ...
	I0814 09:37:41.933844    3217 status.go:330] ha-243000-m04 host status = "Stopped" (err=<nil>)
	I0814 09:37:41.933847    3217 status.go:343] host is not running, skipping remaining checks
	I0814 09:37:41.933849    3217 status.go:257] ha-243000-m04 status: &{Name:ha-243000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000: exit status 7 (30.496709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-243000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
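
The helpers in this section distinguish outcomes by exit code: "status" returns 7 when it can still report per-node state, while "node delete" returns 83 alongside the "host is not running" guidance. A small sketch of reading such codes from a subprocess via exec.ExitError; the interpretation in the comment only restates what this report shows:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "ha-243000",
		"status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// In this report, 7 accompanies "host: Stopped"/"host: Error" status
		// output and 83 the "To start a cluster, run ..." guidance.
		fmt.Printf("non-zero exit: %d\n", ee.ExitCode())
	}
}
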

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.07s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.018971125s)
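
The assertion that follows decodes the profile-list JSON and compares the profile's Status field against the expected "Degraded"; the mismatch it reports is shown below. A rough Go sketch of that decode-and-compare step, declaring only the Name and Status fields the check actually reads:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the parts of `minikube profile list --output json`
// that the check needs; the full structure is visible in the log below.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("bad json:", err)
		return
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-243000" && p.Status != "Degraded" {
			fmt.Printf("expected Degraded, got %q\n", p.Status)
		}
	}
}
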
ha_test.go:413: expected profile "ha-243000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-243000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-243000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-243000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000: exit status 7 (51.986208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-243000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.07s)
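
Note on the Degraded failures in this group: ha_test.go:413 asserts on the Status field that "out/minikube-darwin-arm64 profile list --output json" reports for the profile. With every node stopped, minikube reports the whole profile as "Stopped" rather than "Degraded", so the assertion fails before any per-node state is inspected. A quick way to check the reported status by hand, sketched here with jq (jq is not part of the test harness):

    out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | {Name, Status}'

For the run above this would print Name "ha-243000" with Status "Stopped", matching the JSON quoted in the failure message.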

TestMultiControlPlane/serial/StopCluster (202.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 stop -v=7 --alsologtostderr
E0814 09:38:59.947661    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:40:23.037631    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-243000 stop -v=7 --alsologtostderr: (3m21.991338542s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr: exit status 7 (63.715458ms)

-- stdout --
	ha-243000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:41:05.082625    3289 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:41:05.082818    3289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:41:05.082822    3289 out.go:304] Setting ErrFile to fd 2...
	I0814 09:41:05.082825    3289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:41:05.082991    3289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:41:05.083151    3289 out.go:298] Setting JSON to false
	I0814 09:41:05.083166    3289 mustload.go:65] Loading cluster: ha-243000
	I0814 09:41:05.083208    3289 notify.go:220] Checking for updates...
	I0814 09:41:05.083473    3289 config.go:182] Loaded profile config "ha-243000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:41:05.083480    3289 status.go:255] checking status of ha-243000 ...
	I0814 09:41:05.083801    3289 status.go:330] ha-243000 host status = "Stopped" (err=<nil>)
	I0814 09:41:05.083807    3289 status.go:343] host is not running, skipping remaining checks
	I0814 09:41:05.083810    3289 status.go:257] ha-243000 status: &{Name:ha-243000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 09:41:05.083822    3289 status.go:255] checking status of ha-243000-m02 ...
	I0814 09:41:05.083951    3289 status.go:330] ha-243000-m02 host status = "Stopped" (err=<nil>)
	I0814 09:41:05.083956    3289 status.go:343] host is not running, skipping remaining checks
	I0814 09:41:05.083958    3289 status.go:257] ha-243000-m02 status: &{Name:ha-243000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 09:41:05.083963    3289 status.go:255] checking status of ha-243000-m03 ...
	I0814 09:41:05.084095    3289 status.go:330] ha-243000-m03 host status = "Stopped" (err=<nil>)
	I0814 09:41:05.084099    3289 status.go:343] host is not running, skipping remaining checks
	I0814 09:41:05.084102    3289 status.go:257] ha-243000-m03 status: &{Name:ha-243000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 09:41:05.084107    3289 status.go:255] checking status of ha-243000-m04 ...
	I0814 09:41:05.084231    3289 status.go:330] ha-243000-m04 host status = "Stopped" (err=<nil>)
	I0814 09:41:05.084234    3289 status.go:343] host is not running, skipping remaining checks
	I0814 09:41:05.084237    3289 status.go:257] ha-243000-m04 status: &{Name:ha-243000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr": ha-243000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr": ha-243000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr": ha-243000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-243000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000: exit status 7 (33.097667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-243000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.09s)

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-243000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-243000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.176692541s)

-- stdout --
	* [ha-243000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-243000" primary control-plane node in "ha-243000" cluster
	* Restarting existing qemu2 VM for "ha-243000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-243000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:41:05.146605    3293 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:41:05.146720    3293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:41:05.146723    3293 out.go:304] Setting ErrFile to fd 2...
	I0814 09:41:05.146726    3293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:41:05.146836    3293 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:41:05.147877    3293 out.go:298] Setting JSON to false
	I0814 09:41:05.164112    3293 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2422,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:41:05.164171    3293 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:41:05.168282    3293 out.go:177] * [ha-243000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:41:05.175166    3293 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:41:05.175237    3293 notify.go:220] Checking for updates...
	I0814 09:41:05.183010    3293 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:41:05.186160    3293 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:41:05.189203    3293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:41:05.192207    3293 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:41:05.195160    3293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:41:05.198503    3293 config.go:182] Loaded profile config "ha-243000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:41:05.198777    3293 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:41:05.202161    3293 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:41:05.209204    3293 start.go:297] selected driver: qemu2
	I0814 09:41:05.209212    3293 start.go:901] validating driver "qemu2" against &{Name:ha-243000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-243000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:41:05.209298    3293 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:41:05.211607    3293 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:41:05.211656    3293 cni.go:84] Creating CNI manager for ""
	I0814 09:41:05.211662    3293 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0814 09:41:05.211720    3293 start.go:340] cluster config:
	{Name:ha-243000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-243000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:41:05.215637    3293 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:41:05.224092    3293 out.go:177] * Starting "ha-243000" primary control-plane node in "ha-243000" cluster
	I0814 09:41:05.228205    3293 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:41:05.228225    3293 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:41:05.228233    3293 cache.go:56] Caching tarball of preloaded images
	I0814 09:41:05.228290    3293 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:41:05.228296    3293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:41:05.228373    3293 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/ha-243000/config.json ...
	I0814 09:41:05.228772    3293 start.go:360] acquireMachinesLock for ha-243000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:41:05.228810    3293 start.go:364] duration metric: took 31.125µs to acquireMachinesLock for "ha-243000"
	I0814 09:41:05.228820    3293 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:41:05.228827    3293 fix.go:54] fixHost starting: 
	I0814 09:41:05.228950    3293 fix.go:112] recreateIfNeeded on ha-243000: state=Stopped err=<nil>
	W0814 09:41:05.228959    3293 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:41:05.232144    3293 out.go:177] * Restarting existing qemu2 VM for "ha-243000" ...
	I0814 09:41:05.240122    3293 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:41:05.240155    3293 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b3:0c:a3:98:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/disk.qcow2
	I0814 09:41:05.242212    3293 main.go:141] libmachine: STDOUT: 
	I0814 09:41:05.242239    3293 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:41:05.242269    3293 fix.go:56] duration metric: took 13.441958ms for fixHost
	I0814 09:41:05.242275    3293 start.go:83] releasing machines lock for "ha-243000", held for 13.461041ms
	W0814 09:41:05.242281    3293 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:41:05.242318    3293 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:41:05.242323    3293 start.go:729] Will try again in 5 seconds ...
	I0814 09:41:10.243194    3293 start.go:360] acquireMachinesLock for ha-243000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:41:10.243607    3293 start.go:364] duration metric: took 268.708µs to acquireMachinesLock for "ha-243000"
	I0814 09:41:10.243713    3293 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:41:10.243727    3293 fix.go:54] fixHost starting: 
	I0814 09:41:10.244322    3293 fix.go:112] recreateIfNeeded on ha-243000: state=Stopped err=<nil>
	W0814 09:41:10.244339    3293 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:41:10.252632    3293 out.go:177] * Restarting existing qemu2 VM for "ha-243000" ...
	I0814 09:41:10.257536    3293 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:41:10.257688    3293 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b3:0c:a3:98:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/ha-243000/disk.qcow2
	I0814 09:41:10.264964    3293 main.go:141] libmachine: STDOUT: 
	I0814 09:41:10.265015    3293 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:41:10.265081    3293 fix.go:56] duration metric: took 21.353958ms for fixHost
	I0814 09:41:10.265100    3293 start.go:83] releasing machines lock for "ha-243000", held for 21.459292ms
	W0814 09:41:10.265243    3293 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-243000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-243000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:41:10.272634    3293 out.go:177] 
	W0814 09:41:10.276678    3293 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:41:10.276743    3293 out.go:239] * 
	* 
	W0814 09:41:10.279513    3293 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:41:10.286689    3293 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-243000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000: exit status 7 (67.96175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-243000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
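
Note: every restart attempt in this run dies at the same step. The qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so no VM ever boots. A minimal host-side sanity check, assuming socket_vmnet was set up as the minikube qemu2 driver docs describe:

    # is the daemon's unix socket present on the build host?
    ls -l /var/run/socket_vmnet
    # is the daemon process itself running?
    pgrep -fl socket_vmnet

If the daemon is down, restarting it (for a Homebrew install, something like "sudo brew services restart socket_vmnet") should clear the "Connection refused" errors; this points at CI host setup rather than at the code under test.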

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-243000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-243000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-243000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-243000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000: exit status 7 (31.305625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-243000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-243000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-243000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.350166ms)

-- stdout --
	* The control-plane node ha-243000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-243000"

-- /stdout --
** stderr ** 
	I0814 09:41:10.473034    3310 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:41:10.473187    3310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:41:10.473191    3310 out.go:304] Setting ErrFile to fd 2...
	I0814 09:41:10.473193    3310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:41:10.473336    3310 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:41:10.473580    3310 mustload.go:65] Loading cluster: ha-243000
	I0814 09:41:10.473800    3310 config.go:182] Loaded profile config "ha-243000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0814 09:41:10.474134    3310 out.go:239] ! The control-plane node ha-243000 host is not running (will try others): state=Stopped
	! The control-plane node ha-243000 host is not running (will try others): state=Stopped
	W0814 09:41:10.474237    3310 out.go:239] ! The control-plane node ha-243000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-243000-m02 host is not running (will try others): state=Stopped
	I0814 09:41:10.477031    3310 out.go:177] * The control-plane node ha-243000-m03 host is not running: state=Stopped
	I0814 09:41:10.480847    3310 out.go:177]   To start a cluster, run: "minikube start -p ha-243000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-243000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-243000 -n ha-243000: exit status 7 (29.7875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-243000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (9.95s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-696000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-696000 --driver=qemu2 : exit status 80 (9.881066625s)

-- stdout --
	* [image-696000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-696000" primary control-plane node in "image-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-696000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-696000 -n image-696000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-696000 -n image-696000: exit status 7 (71.092209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-696000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.95s)

TestJSONOutput/start/Command (9.89s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-079000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-079000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.893426125s)

-- stdout --
	{"specversion":"1.0","id":"474b987e-c944-46cf-9a50-2d2256b69d83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-079000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f33e2e5-5010-4b86-b49c-cc969781b3f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19446"}}
	{"specversion":"1.0","id":"3ab081fb-6497-4d28-a695-d3ad2e40fec9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig"}}
	{"specversion":"1.0","id":"8d669095-92e9-4ea4-9bdd-0bde53ed4b1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e064a7d3-ed08-4d50-bf35-e6d4e44cd3f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4fa95782-a2ff-4140-8e44-f6a8a94b0cb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube"}}
	{"specversion":"1.0","id":"047a68ed-4d58-4f5d-a96c-91423feb09c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"016f0401-676f-433f-b9b0-9a49015a7435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1de26936-07a2-44be-8511-5a3440967163","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2ce7b96d-655d-4a75-af60-b37fb4aba0ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-079000\" primary control-plane node in \"json-output-079000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7656b325-4b90-4540-8adb-4d35880ba70b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"22ae9e56-98f4-40f2-a964-4238c6b754a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-079000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"41799c35-18bc-4bad-9f6d-ca92c9657004","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"d41fefdd-d491-435c-bc38-77364ee4c077","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"0b64f769-e83c-4487-a41f-effb805062e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-079000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"eb3c17b7-5d11-4cef-8c2b-f560f51ec925","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"4c8973d6-949e-46fc-b017-14d9b031b00c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-079000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.89s)
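
Note: the marshalling failure here is a knock-on effect of the VM failure above. The raw "OUTPUT:" / "ERROR:" lines from socket_vmnet_client are written straight to stdout in between the CloudEvents JSON lines, and the test's decoder stops at the first byte that is not JSON ("invalid character 'O'"). One way to eyeball just the structured events from a run captured to a file (file name hypothetical; assumes jq) is:

    grep '^{' json-output.log | jq -r '.type + "  " + (.data.message // "")'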

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-079000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-079000 --output=json --user=testUser: exit status 83 (75.0395ms)

-- stdout --
	{"specversion":"1.0","id":"2c120ddf-c9d2-4c38-b544-64b802fa2cb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-079000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"6f1b9097-303c-43e7-b71c-e52a3a6a5278","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-079000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-079000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-079000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-079000 --output=json --user=testUser: exit status 83 (45.264833ms)

-- stdout --
	* The control-plane node json-output-079000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-079000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-079000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-079000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.05s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-547000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-547000 --driver=qemu2 : exit status 80 (9.746517875s)
-- stdout --
	* [first-547000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-547000" primary control-plane node in "first-547000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-547000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-14 09:41:42.99029 -0700 PDT m=+1957.137000543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-549000 -n second-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-549000 -n second-549000: exit status 85 (80.277666ms)
-- stdout --
	* Profile "second-549000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-549000"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-549000" host is not running, skipping log retrieval (state="* Profile \"second-549000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-549000\"")
helpers_test.go:175: Cleaning up "second-549000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-549000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-14 09:41:43.180418 -0700 PDT m=+1957.327135209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-547000 -n first-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-547000 -n first-547000: exit status 7 (30.350458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-547000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-547000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-547000
--- FAIL: TestMinikubeProfile (10.05s)
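Every qemu2 start failure in this run bottoms out in the same line: Failed to connect to "/var/run/socket_vmnet": Connection refused. In other words, no socket_vmnet daemon was listening when socket_vmnet_client tried to hand QEMU its network file descriptor. A quick way to confirm the daemon's state from the host is to dial the unix socket directly; this is an illustrative probe, not part of the test suite, and depending on how socket_vmnet was installed it may need the same privileges as the client:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client connects to.
	// When the daemon is down this fails with "connection refused",
	// matching the errors repeated throughout this report.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}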

TestMountStart/serial/StartWithMountFirst (9.93s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-697000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-697000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.8553675s)
-- stdout --
	* [mount-start-1-697000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-697000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-697000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-697000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-697000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-697000 -n mount-start-1-697000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-697000 -n mount-start-1-697000: exit status 7 (70.686875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-697000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.93s)

TestMultiNode/serial/FreshStart2Nodes (10s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-157000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0814 09:42:01.159774    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-157000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.938748083s)
-- stdout --
	* [multinode-157000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-157000" primary control-plane node in "multinode-157000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-157000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0814 09:41:53.436332    3454 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:41:53.436477    3454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:41:53.436480    3454 out.go:304] Setting ErrFile to fd 2...
	I0814 09:41:53.436483    3454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:41:53.436595    3454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:41:53.437744    3454 out.go:298] Setting JSON to false
	I0814 09:41:53.453761    3454 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2470,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:41:53.453824    3454 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:41:53.461245    3454 out.go:177] * [multinode-157000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:41:53.469222    3454 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:41:53.469271    3454 notify.go:220] Checking for updates...
	I0814 09:41:53.477219    3454 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:41:53.480258    3454 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:41:53.483237    3454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:41:53.486212    3454 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:41:53.489257    3454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:41:53.492326    3454 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:41:53.496142    3454 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:41:53.503270    3454 start.go:297] selected driver: qemu2
	I0814 09:41:53.503279    3454 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:41:53.503287    3454 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:41:53.505684    3454 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:41:53.509189    3454 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:41:53.512363    3454 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:41:53.512389    3454 cni.go:84] Creating CNI manager for ""
	I0814 09:41:53.512394    3454 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0814 09:41:53.512398    3454 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:41:53.512431    3454 start.go:340] cluster config:
	{Name:multinode-157000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:41:53.516193    3454 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:41:53.524232    3454 out.go:177] * Starting "multinode-157000" primary control-plane node in "multinode-157000" cluster
	I0814 09:41:53.528185    3454 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:41:53.528203    3454 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:41:53.528211    3454 cache.go:56] Caching tarball of preloaded images
	I0814 09:41:53.528277    3454 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:41:53.528283    3454 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:41:53.528495    3454 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/multinode-157000/config.json ...
	I0814 09:41:53.528507    3454 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/multinode-157000/config.json: {Name:mk96982c40d25777e51866cdfc9ff8e5ddf3bf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:41:53.528790    3454 start.go:360] acquireMachinesLock for multinode-157000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:41:53.528834    3454 start.go:364] duration metric: took 37.5µs to acquireMachinesLock for "multinode-157000"
	I0814 09:41:53.528847    3454 start.go:93] Provisioning new machine with config: &{Name:multinode-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:41:53.528882    3454 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:41:53.537223    3454 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:41:53.555528    3454 start.go:159] libmachine.API.Create for "multinode-157000" (driver="qemu2")
	I0814 09:41:53.555555    3454 client.go:168] LocalClient.Create starting
	I0814 09:41:53.555620    3454 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:41:53.555654    3454 main.go:141] libmachine: Decoding PEM data...
	I0814 09:41:53.555664    3454 main.go:141] libmachine: Parsing certificate...
	I0814 09:41:53.555700    3454 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:41:53.555725    3454 main.go:141] libmachine: Decoding PEM data...
	I0814 09:41:53.555733    3454 main.go:141] libmachine: Parsing certificate...
	I0814 09:41:53.556089    3454 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:41:53.706127    3454 main.go:141] libmachine: Creating SSH key...
	I0814 09:41:53.831049    3454 main.go:141] libmachine: Creating Disk image...
	I0814 09:41:53.831054    3454 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:41:53.831224    3454 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2
	I0814 09:41:53.840200    3454 main.go:141] libmachine: STDOUT: 
	I0814 09:41:53.840221    3454 main.go:141] libmachine: STDERR: 
	I0814 09:41:53.840275    3454 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2 +20000M
	I0814 09:41:53.848120    3454 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:41:53.848135    3454 main.go:141] libmachine: STDERR: 
	I0814 09:41:53.848145    3454 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2
	I0814 09:41:53.848150    3454 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:41:53.848166    3454 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:41:53.848198    3454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:22:d5:26:4a:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2
	I0814 09:41:53.849749    3454 main.go:141] libmachine: STDOUT: 
	I0814 09:41:53.849773    3454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:41:53.849792    3454 client.go:171] duration metric: took 294.240458ms to LocalClient.Create
	I0814 09:41:55.851926    3454 start.go:128] duration metric: took 2.323105375s to createHost
	I0814 09:41:55.851991    3454 start.go:83] releasing machines lock for "multinode-157000", held for 2.323228291s
	W0814 09:41:55.852035    3454 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:41:55.863240    3454 out.go:177] * Deleting "multinode-157000" in qemu2 ...
	W0814 09:41:55.904083    3454 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:41:55.904114    3454 start.go:729] Will try again in 5 seconds ...
	I0814 09:42:00.906172    3454 start.go:360] acquireMachinesLock for multinode-157000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:42:00.906714    3454 start.go:364] duration metric: took 400.625µs to acquireMachinesLock for "multinode-157000"
	I0814 09:42:00.906856    3454 start.go:93] Provisioning new machine with config: &{Name:multinode-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:42:00.907196    3454 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:42:00.924638    3454 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:42:00.977905    3454 start.go:159] libmachine.API.Create for "multinode-157000" (driver="qemu2")
	I0814 09:42:00.977955    3454 client.go:168] LocalClient.Create starting
	I0814 09:42:00.978072    3454 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:42:00.978134    3454 main.go:141] libmachine: Decoding PEM data...
	I0814 09:42:00.978152    3454 main.go:141] libmachine: Parsing certificate...
	I0814 09:42:00.978212    3454 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:42:00.978256    3454 main.go:141] libmachine: Decoding PEM data...
	I0814 09:42:00.978271    3454 main.go:141] libmachine: Parsing certificate...
	I0814 09:42:00.979091    3454 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:42:01.139554    3454 main.go:141] libmachine: Creating SSH key...
	I0814 09:42:01.285394    3454 main.go:141] libmachine: Creating Disk image...
	I0814 09:42:01.285400    3454 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:42:01.285645    3454 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2
	I0814 09:42:01.294892    3454 main.go:141] libmachine: STDOUT: 
	I0814 09:42:01.294912    3454 main.go:141] libmachine: STDERR: 
	I0814 09:42:01.294960    3454 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2 +20000M
	I0814 09:42:01.302937    3454 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:42:01.302957    3454 main.go:141] libmachine: STDERR: 
	I0814 09:42:01.302967    3454 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2
	I0814 09:42:01.302970    3454 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:42:01.302983    3454 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:42:01.303027    3454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:a4:9a:0c:b7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2
	I0814 09:42:01.304576    3454 main.go:141] libmachine: STDOUT: 
	I0814 09:42:01.304595    3454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:42:01.304610    3454 client.go:171] duration metric: took 326.662083ms to LocalClient.Create
	I0814 09:42:03.306718    3454 start.go:128] duration metric: took 2.399577625s to createHost
	I0814 09:42:03.306832    3454 start.go:83] releasing machines lock for "multinode-157000", held for 2.400167666s
	W0814 09:42:03.307255    3454 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-157000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-157000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:42:03.318034    3454 out.go:177] 
	W0814 09:42:03.321977    3454 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:42:03.322141    3454 out.go:239] * 
	* 
	W0814 09:42:03.327335    3454 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:42:03.333885    3454 out.go:177] 
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-157000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (58.023625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.00s)
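The alsologtostderr trace above shows the create flow plainly: one create attempt, a cleanup ("Deleting ... in qemu2"), a fixed wait ("Will try again in 5 seconds"), then exactly one more attempt before the run exits with GUEST_PROVISION (status 80). A compressed sketch of that control flow, with illustrative names rather than minikube's actual internals in start.go:

package main

import (
	"errors"
	"fmt"
	"time"
)

// createWithRetry compresses the flow in the log: try, clean up the
// partial host, wait 5 seconds, try exactly once more.
func createWithRetry(create func() error, cleanup func()) error {
	err := create()
	if err == nil {
		return nil
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	cleanup() // corresponds to the "* Deleting ... in qemu2 ..." step
	time.Sleep(5 * time.Second)
	return create() // a second failure surfaces as GUEST_PROVISION (exit status 80)
}

func main() {
	attempt := func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	fmt.Println(createWithRetry(attempt, func() {}))
}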

TestMultiNode/serial/DeployApp2Nodes (93.54s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (125.342625ms)
** stderr ** 
	error: cluster "multinode-157000" does not exist
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- rollout status deployment/busybox: exit status 1 (58.465709ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.739834ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.064042ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.526792ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.990584ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.391584ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.829792ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.368083ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.157083ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.20575ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.352708ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.591791ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.842917ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.033959ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.757167ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (30.003917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (93.54s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-157000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.025084ms)
** stderr ** 
	error: no server found for cluster "multinode-157000"
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (29.472333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-157000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-157000 -v 3 --alsologtostderr: exit status 83 (40.301792ms)
-- stdout --
	* The control-plane node multinode-157000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-157000"
-- /stdout --
** stderr ** 
	I0814 09:43:37.060508    3538 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:43:37.060649    3538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:37.060653    3538 out.go:304] Setting ErrFile to fd 2...
	I0814 09:43:37.060655    3538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:37.060775    3538 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:43:37.061009    3538 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:43:37.061217    3538 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:43:37.065088    3538 out.go:177] * The control-plane node multinode-157000 host is not running: state=Stopped
	I0814 09:43:37.066452    3538 out.go:177]   To start a cluster, run: "minikube start -p multinode-157000"
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-157000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (30.550542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-157000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-157000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.463583ms)
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-157000
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-157000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-157000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (30.659042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
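The second error on this test ("unexpected end of JSON input") is a direct consequence of the first: kubectl exited non-zero because the context does not exist, so the jsonpath output the test decodes is empty, and decoding zero bytes yields exactly that error. A two-line reproduction in Go:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Decoding an empty byte slice reproduces the second error above.
	var labels map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}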

TestMultiNode/serial/ProfileList (0.07s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-157000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-157000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-157000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-157000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (29.832708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status --output json --alsologtostderr: exit status 7 (29.386542ms)

-- stdout --
	{"Name":"multinode-157000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0814 09:43:37.264673    3550 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:43:37.264819    3550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:37.264826    3550 out.go:304] Setting ErrFile to fd 2...
	I0814 09:43:37.264829    3550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:37.264942    3550 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:43:37.265045    3550 out.go:298] Setting JSON to true
	I0814 09:43:37.265064    3550 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:43:37.265112    3550 notify.go:220] Checking for updates...
	I0814 09:43:37.265251    3550 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:43:37.265256    3550 status.go:255] checking status of multinode-157000 ...
	I0814 09:43:37.265461    3550 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:43:37.265466    3550 status.go:343] host is not running, skipping remaining checks
	I0814 09:43:37.265468    3550 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-157000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
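Note: the stopped cluster reported a single JSON object (see the stdout block above), while the harness decodes `status --output json` into a []cmd.Status slice; a JSON object cannot unmarshal into a Go slice. A minimal sketch of the mismatch (Status here stands in for minikube's cmd.Status):

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// The single-object payload printed by the status command above.
	out := []byte(`{"Name":"multinode-157000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	var statuses []Status
	err := json.Unmarshal(out, &statuses)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
}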
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (30.633917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 node stop m03: exit status 85 (47.225167ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-157000 node stop m03": exit status 85
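Note: GUEST_NODE_RETRIEVE fails here because the profile config (see the ProfileList dump above) only ever registered the primary node; m02 and m03 were never created, so the lookup for "m03" has nothing to match. An illustrative lookup sketch (findNode is a hypothetical stand-in, not minikube internals):

package main

import "fmt"

type node struct{ Name string }

// findNode scans the profile's node list for the requested name.
func findNode(nodes []node, name string) (*node, error) {
	for i := range nodes {
		if nodes[i].Name == name {
			return &nodes[i], nil
		}
	}
	return nil, fmt.Errorf("retrieving node: Could not find node %s", name)
}

func main() {
	nodes := []node{{Name: ""}} // only the primary node exists in this profile
	if _, err := findNode(nodes, "m03"); err != nil {
		fmt.Println(err) // retrieving node: Could not find node m03
	}
}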
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status: exit status 7 (29.873584ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status --alsologtostderr: exit status 7 (30.442875ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:43:37.403567    3558 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:43:37.403709    3558 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:37.403712    3558 out.go:304] Setting ErrFile to fd 2...
	I0814 09:43:37.403715    3558 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:37.403855    3558 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:43:37.403970    3558 out.go:298] Setting JSON to false
	I0814 09:43:37.403982    3558 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:43:37.404048    3558 notify.go:220] Checking for updates...
	I0814 09:43:37.404198    3558 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:43:37.404204    3558 status.go:255] checking status of multinode-157000 ...
	I0814 09:43:37.404407    3558 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:43:37.404412    3558 status.go:343] host is not running, skipping remaining checks
	I0814 09:43:37.404415    3558 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-157000 status --alsologtostderr": multinode-157000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (30.611042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (43s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.619458ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0814 09:43:37.464655    3562 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:43:37.464888    3562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:37.464892    3562 out.go:304] Setting ErrFile to fd 2...
	I0814 09:43:37.464894    3562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:37.465042    3562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:43:37.465277    3562 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:43:37.465458    3562 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:43:37.469994    3562 out.go:177] 
	W0814 09:43:37.471087    3562 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0814 09:43:37.471092    3562 out.go:239] * 
	* 
	W0814 09:43:37.472692    3562 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:43:37.475900    3562 out.go:177] 

** /stderr **
multinode_test.go:284: I0814 09:43:37.464655    3562 out.go:291] Setting OutFile to fd 1 ...
I0814 09:43:37.464888    3562 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:43:37.464892    3562 out.go:304] Setting ErrFile to fd 2...
I0814 09:43:37.464894    3562 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:43:37.465042    3562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
I0814 09:43:37.465277    3562 mustload.go:65] Loading cluster: multinode-157000
I0814 09:43:37.465458    3562 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0814 09:43:37.469994    3562 out.go:177] 
W0814 09:43:37.471087    3562 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0814 09:43:37.471092    3562 out.go:239] * 
* 
W0814 09:43:37.472692    3562 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0814 09:43:37.475900    3562 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-157000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr: exit status 7 (30.595083ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:43:37.509834    3564 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:43:37.509992    3564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:37.509995    3564 out.go:304] Setting ErrFile to fd 2...
	I0814 09:43:37.509997    3564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:37.510122    3564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:43:37.510240    3564 out.go:298] Setting JSON to false
	I0814 09:43:37.510251    3564 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:43:37.510313    3564 notify.go:220] Checking for updates...
	I0814 09:43:37.510465    3564 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:43:37.510469    3564 status.go:255] checking status of multinode-157000 ...
	I0814 09:43:37.510685    3564 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:43:37.510690    3564 status.go:343] host is not running, skipping remaining checks
	I0814 09:43:37.510692    3564 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr: exit status 7 (74.848583ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:43:38.615775    3566 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:43:38.615975    3566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:38.615980    3566 out.go:304] Setting ErrFile to fd 2...
	I0814 09:43:38.615983    3566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:38.616158    3566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:43:38.616313    3566 out.go:298] Setting JSON to false
	I0814 09:43:38.616328    3566 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:43:38.616376    3566 notify.go:220] Checking for updates...
	I0814 09:43:38.616626    3566 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:43:38.616632    3566 status.go:255] checking status of multinode-157000 ...
	I0814 09:43:38.616905    3566 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:43:38.616911    3566 status.go:343] host is not running, skipping remaining checks
	I0814 09:43:38.616914    3566 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr: exit status 7 (72.716459ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:43:40.346906    3568 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:43:40.347128    3568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:40.347132    3568 out.go:304] Setting ErrFile to fd 2...
	I0814 09:43:40.347135    3568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:40.347325    3568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:43:40.347502    3568 out.go:298] Setting JSON to false
	I0814 09:43:40.347518    3568 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:43:40.347559    3568 notify.go:220] Checking for updates...
	I0814 09:43:40.347800    3568 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:43:40.347808    3568 status.go:255] checking status of multinode-157000 ...
	I0814 09:43:40.348107    3568 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:43:40.348113    3568 status.go:343] host is not running, skipping remaining checks
	I0814 09:43:40.348116    3568 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr: exit status 7 (72.412583ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:43:41.812388    3570 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:43:41.812589    3570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:41.812594    3570 out.go:304] Setting ErrFile to fd 2...
	I0814 09:43:41.812598    3570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:41.812783    3570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:43:41.812973    3570 out.go:298] Setting JSON to false
	I0814 09:43:41.812991    3570 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:43:41.813025    3570 notify.go:220] Checking for updates...
	I0814 09:43:41.813263    3570 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:43:41.813273    3570 status.go:255] checking status of multinode-157000 ...
	I0814 09:43:41.813563    3570 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:43:41.813569    3570 status.go:343] host is not running, skipping remaining checks
	I0814 09:43:41.813572    3570 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr: exit status 7 (68.841125ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:43:43.855797    3572 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:43:43.856012    3572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:43.856017    3572 out.go:304] Setting ErrFile to fd 2...
	I0814 09:43:43.856021    3572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:43.856205    3572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:43:43.856363    3572 out.go:298] Setting JSON to false
	I0814 09:43:43.856380    3572 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:43:43.856414    3572 notify.go:220] Checking for updates...
	I0814 09:43:43.856658    3572 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:43:43.856667    3572 status.go:255] checking status of multinode-157000 ...
	I0814 09:43:43.856955    3572 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:43:43.856962    3572 status.go:343] host is not running, skipping remaining checks
	I0814 09:43:43.856965    3572 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr: exit status 7 (72.663083ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:43:46.739246    3574 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:43:46.739791    3574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:46.739817    3574 out.go:304] Setting ErrFile to fd 2...
	I0814 09:43:46.739824    3574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:46.740487    3574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:43:46.740833    3574 out.go:298] Setting JSON to false
	I0814 09:43:46.740854    3574 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:43:46.740879    3574 notify.go:220] Checking for updates...
	I0814 09:43:46.741139    3574 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:43:46.741148    3574 status.go:255] checking status of multinode-157000 ...
	I0814 09:43:46.741467    3574 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:43:46.741475    3574 status.go:343] host is not running, skipping remaining checks
	I0814 09:43:46.741479    3574 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr: exit status 7 (73.571ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:43:55.825367    3579 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:43:55.825565    3579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:55.825569    3579 out.go:304] Setting ErrFile to fd 2...
	I0814 09:43:55.825572    3579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:43:55.825731    3579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:43:55.825875    3579 out.go:298] Setting JSON to false
	I0814 09:43:55.825890    3579 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:43:55.825939    3579 notify.go:220] Checking for updates...
	I0814 09:43:55.826165    3579 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:43:55.826180    3579 status.go:255] checking status of multinode-157000 ...
	I0814 09:43:55.826473    3579 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:43:55.826479    3579 status.go:343] host is not running, skipping remaining checks
	I0814 09:43:55.826482    3579 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0814 09:43:59.937950    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr: exit status 7 (72.613042ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:44:05.372549    3586 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:44:05.372732    3586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:05.372737    3586 out.go:304] Setting ErrFile to fd 2...
	I0814 09:44:05.372740    3586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:05.372919    3586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:44:05.373060    3586 out.go:298] Setting JSON to false
	I0814 09:44:05.373079    3586 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:44:05.373111    3586 notify.go:220] Checking for updates...
	I0814 09:44:05.373340    3586 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:44:05.373347    3586 status.go:255] checking status of multinode-157000 ...
	I0814 09:44:05.373623    3586 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:44:05.373629    3586 status.go:343] host is not running, skipping remaining checks
	I0814 09:44:05.373632    3586 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr: exit status 7 (58.925458ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:44:20.395318    3588 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:44:20.395475    3588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:20.395479    3588 out.go:304] Setting ErrFile to fd 2...
	I0814 09:44:20.395481    3588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:20.395635    3588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:44:20.395785    3588 out.go:298] Setting JSON to false
	I0814 09:44:20.395801    3588 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:44:20.395822    3588 notify.go:220] Checking for updates...
	I0814 09:44:20.396025    3588 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:44:20.396032    3588 status.go:255] checking status of multinode-157000 ...
	I0814 09:44:20.396283    3588 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:44:20.396288    3588 status.go:343] host is not running, skipping remaining checks
	I0814 09:44:20.396291    3588 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-157000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (33.063042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (43.00s)
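Note: the timestamps in the stderr blocks above (09:43:37, :38, :40, :41, :43, :46, :55, then 09:44:05 and 09:44:20) show the harness re-running `minikube status` with growing pauses before giving up at multinode_test.go:294. A sketch of such a backoff poll (the cadence here is illustrative; the test's real schedule may differ):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	deadline := time.Now().Add(45 * time.Second)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-157000",
			"status", "-v=7", "--alsologtostderr")
		if err := cmd.Run(); err == nil {
			fmt.Println("cluster is up")
			return
		}
		time.Sleep(delay)
		delay *= 2 // widen the gap between attempts, as the log timestamps suggest
	}
	fmt.Println("failed to run minikube status: timed out")
}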

TestMultiNode/serial/RestartKeepsNodes (8.93s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-157000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-157000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-157000: (3.578402375s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-157000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-157000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.21659775s)

-- stdout --
	* [multinode-157000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-157000" primary control-plane node in "multinode-157000" cluster
	* Restarting existing qemu2 VM for "multinode-157000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-157000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:44:24.100072    3617 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:44:24.100215    3617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:24.100220    3617 out.go:304] Setting ErrFile to fd 2...
	I0814 09:44:24.100222    3617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:24.100402    3617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:44:24.101580    3617 out.go:298] Setting JSON to false
	I0814 09:44:24.120064    3617 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2621,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:44:24.120133    3617 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:44:24.125594    3617 out.go:177] * [multinode-157000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:44:24.132554    3617 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:44:24.132590    3617 notify.go:220] Checking for updates...
	I0814 09:44:24.138522    3617 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:44:24.141550    3617 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:44:24.144595    3617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:44:24.147510    3617 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:44:24.150551    3617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:44:24.153789    3617 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:44:24.153846    3617 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:44:24.158661    3617 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:44:24.164498    3617 start.go:297] selected driver: qemu2
	I0814 09:44:24.164505    3617 start.go:901] validating driver "qemu2" against &{Name:multinode-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:44:24.164557    3617 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:44:24.166884    3617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:44:24.166934    3617 cni.go:84] Creating CNI manager for ""
	I0814 09:44:24.166939    3617 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0814 09:44:24.166978    3617 start.go:340] cluster config:
	{Name:multinode-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:44:24.170525    3617 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:44:24.178508    3617 out.go:177] * Starting "multinode-157000" primary control-plane node in "multinode-157000" cluster
	I0814 09:44:24.182522    3617 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:44:24.182537    3617 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:44:24.182546    3617 cache.go:56] Caching tarball of preloaded images
	I0814 09:44:24.182603    3617 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:44:24.182608    3617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:44:24.182680    3617 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/multinode-157000/config.json ...
	I0814 09:44:24.183110    3617 start.go:360] acquireMachinesLock for multinode-157000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:44:24.183146    3617 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "multinode-157000"
	I0814 09:44:24.183156    3617 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:44:24.183160    3617 fix.go:54] fixHost starting: 
	I0814 09:44:24.183286    3617 fix.go:112] recreateIfNeeded on multinode-157000: state=Stopped err=<nil>
	W0814 09:44:24.183296    3617 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:44:24.187500    3617 out.go:177] * Restarting existing qemu2 VM for "multinode-157000" ...
	I0814 09:44:24.195534    3617 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:44:24.195584    3617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:a4:9a:0c:b7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2
	I0814 09:44:24.197622    3617 main.go:141] libmachine: STDOUT: 
	I0814 09:44:24.197642    3617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:44:24.197667    3617 fix.go:56] duration metric: took 14.507042ms for fixHost
	I0814 09:44:24.197672    3617 start.go:83] releasing machines lock for "multinode-157000", held for 14.522167ms
	W0814 09:44:24.197678    3617 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:44:24.197714    3617 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:44:24.197719    3617 start.go:729] Will try again in 5 seconds ...
	I0814 09:44:29.199674    3617 start.go:360] acquireMachinesLock for multinode-157000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:44:29.200025    3617 start.go:364] duration metric: took 271.709µs to acquireMachinesLock for "multinode-157000"
	I0814 09:44:29.200161    3617 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:44:29.200181    3617 fix.go:54] fixHost starting: 
	I0814 09:44:29.200879    3617 fix.go:112] recreateIfNeeded on multinode-157000: state=Stopped err=<nil>
	W0814 09:44:29.200905    3617 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:44:29.208207    3617 out.go:177] * Restarting existing qemu2 VM for "multinode-157000" ...
	I0814 09:44:29.212245    3617 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:44:29.212432    3617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:a4:9a:0c:b7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2
	I0814 09:44:29.221345    3617 main.go:141] libmachine: STDOUT: 
	I0814 09:44:29.221419    3617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:44:29.221511    3617 fix.go:56] duration metric: took 21.3255ms for fixHost
	I0814 09:44:29.221527    3617 start.go:83] releasing machines lock for "multinode-157000", held for 21.479125ms
	W0814 09:44:29.221754    3617 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-157000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-157000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:44:29.229244    3617 out.go:177] 
	W0814 09:44:29.233356    3617 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:44:29.233384    3617 out.go:239] * 
	* 
	W0814 09:44:29.235928    3617 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:44:29.244242    3617 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-157000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-157000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (33.597417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.93s)
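
All of the failures in this section share one root cause: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the /var/run/socket_vmnet UNIX socket, so qemu-system-aarch64 never receives the network file descriptor it expects on fd 3 (-netdev socket,id=net0,fd=3). A minimal Go sketch for probing that socket before rerunning the suite; the socket path is copied from the failing command lines above, everything else is illustrative and not part of minikube:

	// probe_socket_vmnet.go: minimal sketch, see assumptions above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// The same path socket_vmnet_client dials in the failing commands.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the driver failure: the
			// socket_vmnet daemon is not running or not listening on this path.
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}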

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 node delete m03: exit status 83 (39.719625ms)

-- stdout --
	* The control-plane node multinode-157000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-157000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-157000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status --alsologtostderr: exit status 7 (29.292125ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:44:29.428591    3631 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:44:29.428726    3631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:29.428730    3631 out.go:304] Setting ErrFile to fd 2...
	I0814 09:44:29.428732    3631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:29.428858    3631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:44:29.428968    3631 out.go:298] Setting JSON to false
	I0814 09:44:29.428984    3631 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:44:29.429026    3631 notify.go:220] Checking for updates...
	I0814 09:44:29.429172    3631 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:44:29.429183    3631 status.go:255] checking status of multinode-157000 ...
	I0814 09:44:29.429374    3631 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:44:29.429378    3631 status.go:343] host is not running, skipping remaining checks
	I0814 09:44:29.429380    3631 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-157000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (30.180833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
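
Two different exit codes appear above: `node delete` fails with exit status 83, printed alongside the "host is not running: state=Stopped" advice, while `status` returns exit status 7, which helpers_test.go explicitly treats as "may be ok" because a stopped host is a state the status command can legitimately report. A minimal sketch of recovering such an exit code in Go; the binary path and profile name are copied from the log, and this is not the helper's actual code:

	// status_exitcode.go: minimal sketch, see assumptions above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "multinode-157000")
		out, err := cmd.Output()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // 7 in this run: host stopped, "may be ok"
		}
		fmt.Printf("stdout=%q exit=%d\n", out, code)
	}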

TestMultiNode/serial/StopMultiNode (2.25s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-157000 stop: (2.126689958s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status: exit status 7 (63.9645ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-157000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-157000 status --alsologtostderr: exit status 7 (32.634917ms)

-- stdout --
	multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0814 09:44:31.682489    3649 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:44:31.682621    3649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:31.682625    3649 out.go:304] Setting ErrFile to fd 2...
	I0814 09:44:31.682627    3649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:31.682769    3649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:44:31.682886    3649 out.go:298] Setting JSON to false
	I0814 09:44:31.682897    3649 mustload.go:65] Loading cluster: multinode-157000
	I0814 09:44:31.682962    3649 notify.go:220] Checking for updates...
	I0814 09:44:31.683114    3649 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:44:31.683119    3649 status.go:255] checking status of multinode-157000 ...
	I0814 09:44:31.683326    3649 status.go:330] multinode-157000 host status = "Stopped" (err=<nil>)
	I0814 09:44:31.683330    3649 status.go:343] host is not running, skipping remaining checks
	I0814 09:44:31.683333    3649 status.go:257] multinode-157000 status: &{Name:multinode-157000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-157000 status --alsologtostderr": multinode-157000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-157000 status --alsologtostderr": multinode-157000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (30.571417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.25s)
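
The "incorrect number of stopped hosts" and "incorrect number of stopped kubelets" failures at multinode_test.go:364 and :368 are counting checks: a two-node cluster should report two "host: Stopped" and two "kubelet: Stopped" lines after `minikube stop`, but the earlier start failures left only the primary node, so each count comes up one short. A sketch of that style of check; the counting below is illustrative, not the test's exact code:

	// stop_count.go: minimal sketch, see assumptions above.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output captured in this run; a healthy two-node cluster
		// would contain a second block for a node named multinode-157000-m02.
		status := `multinode-157000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	`
		wantNodes := 2
		hosts := strings.Count(status, "host: Stopped")
		kubelets := strings.Count(status, "kubelet: Stopped")
		fmt.Println(hosts == wantNodes, kubelets == wantNodes) // false false
	}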

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-157000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-157000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.184408291s)

-- stdout --
	* [multinode-157000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-157000" primary control-plane node in "multinode-157000" cluster
	* Restarting existing qemu2 VM for "multinode-157000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-157000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:44:31.743135    3653 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:44:31.743266    3653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:31.743269    3653 out.go:304] Setting ErrFile to fd 2...
	I0814 09:44:31.743272    3653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:31.743422    3653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:44:31.744412    3653 out.go:298] Setting JSON to false
	I0814 09:44:31.760675    3653 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2628,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:44:31.760747    3653 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:44:31.765439    3653 out.go:177] * [multinode-157000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:44:31.772412    3653 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:44:31.772475    3653 notify.go:220] Checking for updates...
	I0814 09:44:31.779319    3653 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:44:31.782305    3653 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:44:31.785304    3653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:44:31.788309    3653 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:44:31.791340    3653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:44:31.794580    3653 config.go:182] Loaded profile config "multinode-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:44:31.794842    3653 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:44:31.799246    3653 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:44:31.806310    3653 start.go:297] selected driver: qemu2
	I0814 09:44:31.806317    3653 start.go:901] validating driver "qemu2" against &{Name:multinode-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:44:31.806381    3653 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:44:31.808653    3653 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:44:31.808704    3653 cni.go:84] Creating CNI manager for ""
	I0814 09:44:31.808709    3653 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0814 09:44:31.808761    3653 start.go:340] cluster config:
	{Name:multinode-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:44:31.812310    3653 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:44:31.820270    3653 out.go:177] * Starting "multinode-157000" primary control-plane node in "multinode-157000" cluster
	I0814 09:44:31.823221    3653 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:44:31.823240    3653 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:44:31.823252    3653 cache.go:56] Caching tarball of preloaded images
	I0814 09:44:31.823307    3653 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:44:31.823313    3653 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:44:31.823384    3653 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/multinode-157000/config.json ...
	I0814 09:44:31.823849    3653 start.go:360] acquireMachinesLock for multinode-157000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:44:31.823876    3653 start.go:364] duration metric: took 21.542µs to acquireMachinesLock for "multinode-157000"
	I0814 09:44:31.823886    3653 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:44:31.823890    3653 fix.go:54] fixHost starting: 
	I0814 09:44:31.824005    3653 fix.go:112] recreateIfNeeded on multinode-157000: state=Stopped err=<nil>
	W0814 09:44:31.824015    3653 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:44:31.832126    3653 out.go:177] * Restarting existing qemu2 VM for "multinode-157000" ...
	I0814 09:44:31.836248    3653 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:44:31.836299    3653 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:a4:9a:0c:b7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2
	I0814 09:44:31.838316    3653 main.go:141] libmachine: STDOUT: 
	I0814 09:44:31.838350    3653 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:44:31.838378    3653 fix.go:56] duration metric: took 14.487042ms for fixHost
	I0814 09:44:31.838382    3653 start.go:83] releasing machines lock for "multinode-157000", held for 14.501959ms
	W0814 09:44:31.838388    3653 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:44:31.838419    3653 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:44:31.838424    3653 start.go:729] Will try again in 5 seconds ...
	I0814 09:44:36.840415    3653 start.go:360] acquireMachinesLock for multinode-157000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:44:36.840882    3653 start.go:364] duration metric: took 349.167µs to acquireMachinesLock for "multinode-157000"
	I0814 09:44:36.841006    3653 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:44:36.841025    3653 fix.go:54] fixHost starting: 
	I0814 09:44:36.841730    3653 fix.go:112] recreateIfNeeded on multinode-157000: state=Stopped err=<nil>
	W0814 09:44:36.841757    3653 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:44:36.846143    3653 out.go:177] * Restarting existing qemu2 VM for "multinode-157000" ...
	I0814 09:44:36.855151    3653 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:44:36.855418    3653 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:a4:9a:0c:b7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/multinode-157000/disk.qcow2
	I0814 09:44:36.864238    3653 main.go:141] libmachine: STDOUT: 
	I0814 09:44:36.864310    3653 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:44:36.864373    3653 fix.go:56] duration metric: took 23.34975ms for fixHost
	I0814 09:44:36.864387    3653 start.go:83] releasing machines lock for "multinode-157000", held for 23.487ms
	W0814 09:44:36.864613    3653 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-157000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-157000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:44:36.872108    3653 out.go:177] 
	W0814 09:44:36.876203    3653 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:44:36.876255    3653 out.go:239] * 
	* 
	W0814 09:44:36.879082    3653 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:44:36.885997    3653 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-157000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (66.805791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
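
The stderr log above shows minikube's restart control flow: start.go:714 records the host-start error, start.go:729 waits five seconds and tries exactly once more, and the second failure escalates to the fatal GUEST_PROVISION exit. A sketch of that flow under the same error; the function names are illustrative, not minikube's internals:

	// retry_start.go: minimal sketch, see assumptions above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that keeps failing above.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // fixed delay, single retry
			err = startHost()
		}
		if err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}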

TestMultiNode/serial/ValidateNameConflict (20.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-157000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-157000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-157000-m01 --driver=qemu2 : exit status 80 (9.860153042s)

-- stdout --
	* [multinode-157000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-157000-m01" primary control-plane node in "multinode-157000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-157000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-157000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-157000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-157000-m02 --driver=qemu2 : exit status 80 (9.988616834s)

-- stdout --
	* [multinode-157000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-157000-m02" primary control-plane node in "multinode-157000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-157000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-157000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-157000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-157000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-157000: exit status 83 (83.501083ms)

-- stdout --
	* The control-plane node multinode-157000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-157000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-157000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-157000 -n multinode-157000: exit status 7 (30.747959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-157000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.08s)
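
ValidateNameConflict deliberately picks profile names shaped like the node names of the existing cluster: multinode-157000-m01 matches the `<profile>-mNN` pattern that multinode clusters use for their secondary nodes (the m02/m03 names above). A sketch of such a collision check; the regexp is illustrative, not minikube's actual validation:

	// name_conflict.go: minimal sketch, see assumptions above.
	package main

	import (
		"fmt"
		"regexp"
	)

	// nodeSuffix matches the "<profile>-mNN" node naming visible in this report.
	var nodeSuffix = regexp.MustCompile(`^(.+)-m(\d+)$`)

	// conflictsWith reports whether a requested profile name would shadow a
	// node of an existing cluster profile.
	func conflictsWith(requested, existing string) bool {
		m := nodeSuffix.FindStringSubmatch(requested)
		return m != nil && m[1] == existing
	}

	func main() {
		fmt.Println(conflictsWith("multinode-157000-m01", "multinode-157000")) // true
		fmt.Println(conflictsWith("multinode-158000", "multinode-157000"))     // false
	}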

TestPreload (10.11s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-717000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-717000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.962179541s)

-- stdout --
	* [test-preload-717000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-717000" primary control-plane node in "test-preload-717000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-717000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:44:57.183495    3708 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:44:57.183619    3708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:57.183622    3708 out.go:304] Setting ErrFile to fd 2...
	I0814 09:44:57.183624    3708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:44:57.183756    3708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:44:57.184842    3708 out.go:298] Setting JSON to false
	I0814 09:44:57.200864    3708 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2654,"bootTime":1723651243,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:44:57.200930    3708 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:44:57.204863    3708 out.go:177] * [test-preload-717000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:44:57.212800    3708 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:44:57.212860    3708 notify.go:220] Checking for updates...
	I0814 09:44:57.217786    3708 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:44:57.220775    3708 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:44:57.223792    3708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:44:57.226747    3708 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:44:57.229821    3708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:44:57.233136    3708 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:44:57.233184    3708 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:44:57.237706    3708 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:44:57.244788    3708 start.go:297] selected driver: qemu2
	I0814 09:44:57.244797    3708 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:44:57.244805    3708 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:44:57.247068    3708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:44:57.249747    3708 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:44:57.252854    3708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:44:57.252876    3708 cni.go:84] Creating CNI manager for ""
	I0814 09:44:57.252885    3708 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:44:57.252890    3708 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:44:57.252919    3708 start.go:340] cluster config:
	{Name:test-preload-717000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:44:57.256608    3708 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:44:57.262754    3708 out.go:177] * Starting "test-preload-717000" primary control-plane node in "test-preload-717000" cluster
	I0814 09:44:57.266790    3708 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0814 09:44:57.266893    3708 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/test-preload-717000/config.json ...
	I0814 09:44:57.266917    3708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/test-preload-717000/config.json: {Name:mk2ad7c56e926682b5eb56f40719e360b3569d11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:44:57.266921    3708 cache.go:107] acquiring lock: {Name:mk5fd861231df5b1cda3ff3fa54d336af27b1727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:44:57.266928    3708 cache.go:107] acquiring lock: {Name:mkec768f75f40d6661fb743d1bfc28d689e18d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:44:57.266928    3708 cache.go:107] acquiring lock: {Name:mkadb74dc2d678918ce7a1e2b5b5e7f49621b70e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:44:57.267091    3708 cache.go:107] acquiring lock: {Name:mk066903400a08e49bf9704c694596ae23bf3559 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:44:57.267185    3708 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0814 09:44:57.267185    3708 cache.go:107] acquiring lock: {Name:mk7187998cf292c0090c9c06527f834b21d526a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:44:57.267197    3708 cache.go:107] acquiring lock: {Name:mk4ca9a6f7607cc0115fa0d6c57e3d808291a273 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:44:57.267212    3708 cache.go:107] acquiring lock: {Name:mkdf199de92dd149b0e34e3ea9d2f2f5bd742d4a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:44:57.267210    3708 cache.go:107] acquiring lock: {Name:mk8c7d16e9cab6c2471653e057c25bf59cdf6961 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:44:57.267342    3708 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0814 09:44:57.267373    3708 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:44:57.267376    3708 start.go:360] acquireMachinesLock for test-preload-717000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:44:57.267409    3708 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:44:57.267433    3708 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0814 09:44:57.267187    3708 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0814 09:44:57.267459    3708 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0814 09:44:57.267502    3708 start.go:364] duration metric: took 111.042µs to acquireMachinesLock for "test-preload-717000"
	I0814 09:44:57.267527    3708 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:44:57.267515    3708 start.go:93] Provisioning new machine with config: &{Name:test-preload-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:44:57.267557    3708 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:44:57.274792    3708 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:44:57.278449    3708 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:44:57.278563    3708 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:44:57.280367    3708 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0814 09:44:57.280485    3708 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0814 09:44:57.281216    3708 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0814 09:44:57.281212    3708 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0814 09:44:57.281259    3708 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:44:57.281300    3708 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0814 09:44:57.292024    3708 start.go:159] libmachine.API.Create for "test-preload-717000" (driver="qemu2")
	I0814 09:44:57.292040    3708 client.go:168] LocalClient.Create starting
	I0814 09:44:57.292120    3708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:44:57.292151    3708 main.go:141] libmachine: Decoding PEM data...
	I0814 09:44:57.292164    3708 main.go:141] libmachine: Parsing certificate...
	I0814 09:44:57.292203    3708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:44:57.292226    3708 main.go:141] libmachine: Decoding PEM data...
	I0814 09:44:57.292236    3708 main.go:141] libmachine: Parsing certificate...
	I0814 09:44:57.292619    3708 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:44:57.445424    3708 main.go:141] libmachine: Creating SSH key...
	I0814 09:44:57.617662    3708 main.go:141] libmachine: Creating Disk image...
	I0814 09:44:57.617682    3708 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:44:57.617878    3708 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2
	I0814 09:44:57.627669    3708 main.go:141] libmachine: STDOUT: 
	I0814 09:44:57.627690    3708 main.go:141] libmachine: STDERR: 
	I0814 09:44:57.627758    3708 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2 +20000M
	I0814 09:44:57.636775    3708 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:44:57.636796    3708 main.go:141] libmachine: STDERR: 
	I0814 09:44:57.636809    3708 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2
	I0814 09:44:57.636813    3708 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:44:57.636826    3708 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:44:57.636867    3708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:98:88:50:48:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2
	I0814 09:44:57.638728    3708 main.go:141] libmachine: STDOUT: 
	I0814 09:44:57.638748    3708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:44:57.638773    3708 client.go:171] duration metric: took 346.738125ms to LocalClient.Create
	I0814 09:44:57.749573    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0814 09:44:57.772888    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0814 09:44:57.792825    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0814 09:44:57.819803    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0814 09:44:57.840417    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0814 09:44:57.853606    3708 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0814 09:44:57.853647    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0814 09:44:57.894293    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0814 09:44:57.984959    3708 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0814 09:44:57.985009    3708 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 717.960792ms
	I0814 09:44:57.985044    3708 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0814 09:44:58.088784    3708 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0814 09:44:58.088890    3708 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 09:44:58.387979    3708 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0814 09:44:58.388041    3708 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.121156875s
	I0814 09:44:58.388081    3708 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0814 09:44:59.638962    3708 start.go:128] duration metric: took 2.371454958s to createHost
	I0814 09:44:59.639045    3708 start.go:83] releasing machines lock for "test-preload-717000", held for 2.371617041s
	W0814 09:44:59.639089    3708 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:44:59.654472    3708 out.go:177] * Deleting "test-preload-717000" in qemu2 ...
	W0814 09:44:59.687783    3708 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:44:59.687814    3708 start.go:729] Will try again in 5 seconds ...
	I0814 09:45:00.090859    3708 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0814 09:45:00.090946    3708 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.823912375s
	I0814 09:45:00.090973    3708 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0814 09:45:00.701666    3708 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0814 09:45:00.701716    3708 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.434674625s
	I0814 09:45:00.701777    3708 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0814 09:45:02.301630    3708 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0814 09:45:02.301682    3708 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.034934625s
	I0814 09:45:02.301712    3708 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0814 09:45:02.977691    3708 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0814 09:45:02.977767    3708 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.71103625s
	I0814 09:45:02.977805    3708 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0814 09:45:04.177286    3708 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0814 09:45:04.177329    3708 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.910370333s
	I0814 09:45:04.177356    3708 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0814 09:45:04.689771    3708 start.go:360] acquireMachinesLock for test-preload-717000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:45:04.690234    3708 start.go:364] duration metric: took 383.125µs to acquireMachinesLock for "test-preload-717000"
	I0814 09:45:04.690342    3708 start.go:93] Provisioning new machine with config: &{Name:test-preload-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:45:04.690577    3708 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:45:04.701170    3708 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:45:04.752422    3708 start.go:159] libmachine.API.Create for "test-preload-717000" (driver="qemu2")
	I0814 09:45:04.752467    3708 client.go:168] LocalClient.Create starting
	I0814 09:45:04.752584    3708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:45:04.752648    3708 main.go:141] libmachine: Decoding PEM data...
	I0814 09:45:04.752668    3708 main.go:141] libmachine: Parsing certificate...
	I0814 09:45:04.752729    3708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:45:04.752774    3708 main.go:141] libmachine: Decoding PEM data...
	I0814 09:45:04.752791    3708 main.go:141] libmachine: Parsing certificate...
	I0814 09:45:04.753318    3708 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:45:04.916488    3708 main.go:141] libmachine: Creating SSH key...
	I0814 09:45:05.041543    3708 main.go:141] libmachine: Creating Disk image...
	I0814 09:45:05.041549    3708 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:45:05.041728    3708 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2
	I0814 09:45:05.051418    3708 main.go:141] libmachine: STDOUT: 
	I0814 09:45:05.051438    3708 main.go:141] libmachine: STDERR: 
	I0814 09:45:05.051491    3708 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2 +20000M
	I0814 09:45:05.059491    3708 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:45:05.059507    3708 main.go:141] libmachine: STDERR: 
	I0814 09:45:05.059518    3708 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2
	I0814 09:45:05.059523    3708 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:45:05.059540    3708 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:45:05.059582    3708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:3b:5f:92:b1:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/test-preload-717000/disk.qcow2
	I0814 09:45:05.061277    3708 main.go:141] libmachine: STDOUT: 
	I0814 09:45:05.061294    3708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:45:05.061306    3708 client.go:171] duration metric: took 308.844833ms to LocalClient.Create
	I0814 09:45:05.562342    3708 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0814 09:45:05.562404    3708 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.295589584s
	I0814 09:45:05.562429    3708 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0814 09:45:05.562539    3708 cache.go:87] Successfully saved all images to host disk.
	I0814 09:45:07.063424    3708 start.go:128] duration metric: took 2.372890792s to createHost
	I0814 09:45:07.063493    3708 start.go:83] releasing machines lock for "test-preload-717000", held for 2.373318375s
	W0814 09:45:07.063848    3708 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:45:07.082408    3708 out.go:177] 
	W0814 09:45:07.090380    3708 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:45:07.090410    3708 out.go:239] * 
	* 
	W0814 09:45:07.092983    3708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:45:07.102329    3708 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-717000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-14 09:45:07.121268 -0700 PDT m=+2161.275151084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-717000 -n test-preload-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-717000 -n test-preload-717000: exit status 7 (66.180625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-717000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-717000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-717000
--- FAIL: TestPreload (10.11s)
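Note on the failure mode: the start dies before QEMU ever launches, because nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client exits with "Connection refused". A minimal Go sketch of the same probe (the helper name checkSocketVMnet is hypothetical; this is not minikube's actual code) reproduces the error seen in the STDERR lines above:

package main

import (
	"fmt"
	"net"
	"time"
)

// checkSocketVMnet dials the unix socket that socket_vmnet_client connects to.
// With the socket_vmnet daemon down, the dial fails with "connection refused",
// matching the STDERR in this log.
func checkSocketVMnet(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return fmt.Errorf("socket_vmnet not reachable at %q: %w", path, err)
	}
	return conn.Close()
}

func main() {
	if err := checkSocketVMnet("/var/run/socket_vmnet"); err != nil {
		fmt.Println("preflight failed:", err)
		return
	}
	fmt.Println("socket_vmnet is up")
}

Running a probe like this on the agent before the suite would distinguish an environment problem (daemon not started) from a driver regression.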

TestScheduledStopUnix (9.99s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-405000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-405000 --memory=2048 --driver=qemu2 : exit status 80 (9.83576225s)

-- stdout --
	* [scheduled-stop-405000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-405000" primary control-plane node in "scheduled-stop-405000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-405000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-405000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-405000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-405000" primary control-plane node in "scheduled-stop-405000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-405000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-405000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-14 09:45:17.1037 -0700 PDT m=+2171.257933959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-405000 -n scheduled-stop-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-405000 -n scheduled-stop-405000: exit status 7 (68.535042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-405000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-405000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-405000
--- FAIL: TestScheduledStopUnix (9.99s)
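The trace above shows minikube's create-retry shape: one failed host creation logged as "! StartHost failed, but will try again", a pause, a second identical failure, then exit status 80. A simplified Go model of that control flow (illustrative only; function names are invented and this is not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// errRefused stands in for the socket_vmnet dial failure in this log.
var errRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// createHost always fails while the daemon is down, like both
// "Creating qemu2 VM" attempts in the trace above.
func createHost() error { return errRefused }

func startWithRetry() error {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // the "Will try again in 5 seconds ..." pause
		return createHost()         // a second failure is terminal (exit status 80)
	}
	return nil
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}

Because the underlying precondition (a running socket_vmnet daemon) never changes between attempts, the retry cannot succeed, which is why every test in this group fails in roughly ten seconds.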

TestSkaffold (13.22s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1153856311 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1153856311 version: (1.056869458s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-237000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-237000 --memory=2600 --driver=qemu2 : exit status 80 (9.828333167s)

-- stdout --
	* [skaffold-237000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-237000" primary control-plane node in "skaffold-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-237000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-237000" primary control-plane node in "skaffold-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-14 09:45:30.334066 -0700 PDT m=+2184.488765209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-237000 -n skaffold-237000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-237000 -n skaffold-237000: exit status 7 (63.245542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-237000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-237000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-237000
--- FAIL: TestSkaffold (13.22s)
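For reference, the post-mortem status probe the harness runs after each failure can be reproduced outside the test suite. A small Go sketch (hypothetical wrapper; the binary path, flags, and profile name are copied verbatim from the log lines above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as helpers_test.go:239 runs after a failure.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "skaffold-237000", "-n", "skaffold-237000")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// As the harness notes, exit status 7 with "Stopped" means the
		// profile exists but the host is not running ("may be ok").
		fmt.Printf("host state: %s(exit 7, may be ok)\n", out)
		return
	}
	fmt.Printf("status: %s err: %v\n", out, err)
}

In this run the probe consistently reports "Stopped", so log retrieval is skipped and the profile is deleted during cleanup.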

TestRunningBinaryUpgrade (610.15s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.601858479 start -p running-upgrade-579000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.601858479 start -p running-upgrade-579000 --memory=2200 --vm-driver=qemu2 : (1m2.623385709s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-579000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0814 09:47:01.105422    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:48:59.881369    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:50:04.186687    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:52:01.091400    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:53:59.868119    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-579000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m32.633764s)

-- stdout --
	* [running-upgrade-579000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-579000" primary control-plane node in "running-upgrade-579000" cluster
	* Updating the running qemu2 "running-upgrade-579000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0814 09:46:55.660923    4033 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:46:55.661090    4033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:46:55.661095    4033 out.go:304] Setting ErrFile to fd 2...
	I0814 09:46:55.661097    4033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:46:55.661242    4033 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:46:55.662764    4033 out.go:298] Setting JSON to false
	I0814 09:46:55.682080    4033 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2772,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:46:55.682215    4033 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:46:55.686794    4033 out.go:177] * [running-upgrade-579000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:46:55.692853    4033 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:46:55.692891    4033 notify.go:220] Checking for updates...
	I0814 09:46:55.699826    4033 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:46:55.703801    4033 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:46:55.706799    4033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:46:55.709806    4033 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:46:55.712831    4033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:46:55.716165    4033 config.go:182] Loaded profile config "running-upgrade-579000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0814 09:46:55.719783    4033 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 09:46:55.722789    4033 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:46:55.726777    4033 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:46:55.732783    4033 start.go:297] selected driver: qemu2
	I0814 09:46:55.732795    4033 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50343 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0814 09:46:55.732885    4033 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:46:55.735534    4033 cni.go:84] Creating CNI manager for ""
	I0814 09:46:55.735557    4033 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:46:55.735595    4033 start.go:340] cluster config:
	{Name:running-upgrade-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50343 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0814 09:46:55.735658    4033 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:46:55.742769    4033 out.go:177] * Starting "running-upgrade-579000" primary control-plane node in "running-upgrade-579000" cluster
	I0814 09:46:55.746792    4033 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0814 09:46:55.746850    4033 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0814 09:46:55.746858    4033 cache.go:56] Caching tarball of preloaded images
	I0814 09:46:55.746938    4033 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:46:55.746944    4033 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0814 09:46:55.747003    4033 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/config.json ...
	I0814 09:46:55.747443    4033 start.go:360] acquireMachinesLock for running-upgrade-579000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:47:05.562944    4033 start.go:364] duration metric: took 9.816802875s to acquireMachinesLock for "running-upgrade-579000"
	I0814 09:47:05.562967    4033 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:47:05.562970    4033 fix.go:54] fixHost starting: 
	I0814 09:47:05.563773    4033 fix.go:112] recreateIfNeeded on running-upgrade-579000: state=Running err=<nil>
	W0814 09:47:05.563783    4033 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:47:05.571959    4033 out.go:177] * Updating the running qemu2 "running-upgrade-579000" VM ...
	I0814 09:47:05.575891    4033 machine.go:94] provisionDockerMachine start ...
	I0814 09:47:05.575939    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.576054    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:05.576058    4033 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 09:47:05.638406    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-579000
	
	I0814 09:47:05.638422    4033 buildroot.go:166] provisioning hostname "running-upgrade-579000"
	I0814 09:47:05.638446    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.638567    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:05.638574    4033 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-579000 && echo "running-upgrade-579000" | sudo tee /etc/hostname
	I0814 09:47:05.703790    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-579000
	
	I0814 09:47:05.703845    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.703968    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:05.703978    4033 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-579000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-579000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-579000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:47:05.764390    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:47:05.764403    4033 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19446-1067/.minikube CaCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19446-1067/.minikube}
	I0814 09:47:05.764411    4033 buildroot.go:174] setting up certificates
	I0814 09:47:05.764415    4033 provision.go:84] configureAuth start
	I0814 09:47:05.764419    4033 provision.go:143] copyHostCerts
	I0814 09:47:05.764492    4033 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem, removing ...
	I0814 09:47:05.764499    4033 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem
	I0814 09:47:05.764617    4033 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem (1082 bytes)
	I0814 09:47:05.764803    4033 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem, removing ...
	I0814 09:47:05.764807    4033 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem
	I0814 09:47:05.764856    4033 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem (1123 bytes)
	I0814 09:47:05.764962    4033 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem, removing ...
	I0814 09:47:05.764966    4033 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem
	I0814 09:47:05.765005    4033 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem (1675 bytes)
	I0814 09:47:05.765096    4033 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-579000 san=[127.0.0.1 localhost minikube running-upgrade-579000]
	I0814 09:47:05.940070    4033 provision.go:177] copyRemoteCerts
	I0814 09:47:05.940114    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:47:05.940124    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	I0814 09:47:05.972328    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 09:47:05.979486    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0814 09:47:05.986540    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 09:47:05.994326    4033 provision.go:87] duration metric: took 229.921834ms to configureAuth
	I0814 09:47:05.994338    4033 buildroot.go:189] setting minikube options for container-runtime
	I0814 09:47:05.994451    4033 config.go:182] Loaded profile config "running-upgrade-579000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0814 09:47:05.994486    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.994588    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:05.994592    4033 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0814 09:47:06.056516    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0814 09:47:06.056527    4033 buildroot.go:70] root file system type: tmpfs
	I0814 09:47:06.056577    4033 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0814 09:47:06.056624    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:06.056748    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:06.056783    4033 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0814 09:47:06.121758    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0814 09:47:06.121810    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:06.121930    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:06.121938    4033 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0814 09:47:06.182300    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:47:06.182313    4033 machine.go:97] duration metric: took 606.481834ms to provisionDockerMachine
	I0814 09:47:06.182318    4033 start.go:293] postStartSetup for "running-upgrade-579000" (driver="qemu2")
	I0814 09:47:06.182325    4033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:47:06.182380    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:47:06.182389    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	I0814 09:47:06.238783    4033 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 09:47:06.240250    4033 info.go:137] Remote host: Buildroot 2021.02.12
	I0814 09:47:06.240261    4033 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19446-1067/.minikube/addons for local assets ...
	I0814 09:47:06.240341    4033 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19446-1067/.minikube/files for local assets ...
	I0814 09:47:06.240434    4033 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem -> 16002.pem in /etc/ssl/certs
	I0814 09:47:06.240528    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:47:06.243411    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem --> /etc/ssl/certs/16002.pem (1708 bytes)
	I0814 09:47:06.252561    4033 start.go:296] duration metric: took 70.242792ms for postStartSetup
	I0814 09:47:06.252594    4033 fix.go:56] duration metric: took 689.686458ms for fixHost
	I0814 09:47:06.252642    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:06.252761    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:06.252767    4033 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 09:47:06.318323    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723654026.438696102
	
	I0814 09:47:06.318337    4033 fix.go:216] guest clock: 1723654026.438696102
	I0814 09:47:06.318342    4033 fix.go:229] Guest: 2024-08-14 09:47:06.438696102 -0700 PDT Remote: 2024-08-14 09:47:06.252596 -0700 PDT m=+10.617043792 (delta=186.100102ms)
	I0814 09:47:06.318361    4033 fix.go:200] guest clock delta is within tolerance: 186.100102ms
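fix.go is comparing a timestamp taken inside the guest (the date +%s.%N above) against the host clock captured when the command returned; here the guest runs about 186 ms ahead, inside minikube's drift tolerance, so no clock resync is forced. A hypothetical manual check of the same delta (key path taken from the log; requires bc for the decimal subtraction):

    guest=$(ssh -i /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa -p 50274 docker@localhost 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest-host delta: $(echo "$guest - $host" | bc) s"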
	I0814 09:47:06.318364    4033 start.go:83] releasing machines lock for "running-upgrade-579000", held for 755.491333ms
	I0814 09:47:06.318436    4033 ssh_runner.go:195] Run: cat /version.json
	I0814 09:47:06.318450    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	I0814 09:47:06.318436    4033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 09:47:06.318474    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	W0814 09:47:06.319126    4033 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50464->127.0.0.1:50274: write: broken pipe
	I0814 09:47:06.319138    4033 retry.go:31] will retry after 260.071303ms: ssh: handshake failed: write tcp 127.0.0.1:50464->127.0.0.1:50274: write: broken pipe
	W0814 09:47:06.615411    4033 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0814 09:47:06.615479    4033 ssh_runner.go:195] Run: systemctl --version
	I0814 09:47:06.617704    4033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 09:47:06.619457    4033 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 09:47:06.619483    4033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0814 09:47:06.623014    4033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0814 09:47:06.627971    4033 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 09:47:06.627980    4033 start.go:495] detecting cgroup driver to use...
	I0814 09:47:06.628046    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:47:06.633541    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0814 09:47:06.637381    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0814 09:47:06.640652    4033 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0814 09:47:06.640688    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0814 09:47:06.643980    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0814 09:47:06.647070    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0814 09:47:06.650442    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0814 09:47:06.653427    4033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 09:47:06.658252    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0814 09:47:06.661301    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0814 09:47:06.664286    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
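Taken together, the sed edits above rewrite /etc/containerd/config.toml in place: the sandbox image is pinned to registry.k8s.io/pause:3.7, restrict_oom_score_adj is disabled, SystemdCgroup is forced to false (matching the cgroupfs driver chosen for this guest), any io.containerd.runtime.v1.linux and runc.v1 runtime entries are migrated to io.containerd.runc.v2, unprivileged ports are enabled under the CRI plugin, and conf_dir is pointed at /etc/cni/net.d. A one-pass check; the expected values are listed below it (illustrative — their exact placement in the file varies by TOML section):

    grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
    #   sandbox_image = "registry.k8s.io/pause:3.7"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   enable_unprivileged_ports = true
    #   conf_dir = "/etc/cni/net.d"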
	I0814 09:47:06.667336    4033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:47:06.670273    4033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:47:06.672814    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:06.767246    4033 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0814 09:47:06.779385    4033 start.go:495] detecting cgroup driver to use...
	I0814 09:47:06.779459    4033 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0814 09:47:06.785008    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 09:47:06.799321    4033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 09:47:06.806550    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 09:47:06.811445    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0814 09:47:06.815791    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:47:06.821538    4033 ssh_runner.go:195] Run: which cri-dockerd
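The which cri-dockerd lookup confirms the shim is installed; just above it, /etc/crictl.yaml was rewritten a second time, replacing the containerd endpoint written at 09:47:06.628 with the cri-dockerd socket. The file ends up as a single line:

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/cri-dockerd.sock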
	I0814 09:47:06.822771    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0814 09:47:06.825319    4033 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0814 09:47:06.829840    4033 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0814 09:47:06.923236    4033 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0814 09:47:07.037319    4033 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0814 09:47:07.037379    4033 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0814 09:47:07.048106    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:07.144984    4033 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0814 09:47:10.830204    4033 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.685556792s)
	I0814 09:47:10.830274    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0814 09:47:10.836884    4033 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0814 09:47:10.842962    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0814 09:47:10.847188    4033 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0814 09:47:10.929750    4033 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0814 09:47:11.014324    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:11.101554    4033 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0814 09:47:11.108145    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0814 09:47:11.112555    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:11.195560    4033 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0814 09:47:11.242055    4033 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0814 09:47:11.242135    4033 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0814 09:47:11.244441    4033 start.go:563] Will wait 60s for crictl version
	I0814 09:47:11.244493    4033 ssh_runner.go:195] Run: which crictl
	I0814 09:47:11.245835    4033 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 09:47:11.258194    4033 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
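The crictl output confirms the chain just assembled: crictl reaches Docker 20.10.16 through the cri-dockerd shim, with RuntimeApiVersion 1.41.0 matching the Docker Engine API version for the 20.10 series. The docker version calls that follow read the same engine version directly from the daemon; the equivalent manual checks inside the guest would be:

    sudo /usr/bin/crictl version
    docker version --format '{{.Server.Version}}'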
	I0814 09:47:11.258266    4033 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0814 09:47:11.271333    4033 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0814 09:47:11.310892    4033 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0814 09:47:11.311010    4033 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0814 09:47:11.312373    4033 kubeadm.go:883] updating cluster {Name:running-upgrade-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50343 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0814 09:47:11.312420    4033 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0814 09:47:11.312460    4033 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0814 09:47:11.323330    4033 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0814 09:47:11.323338    4033 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0814 09:47:11.323384    4033 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0814 09:47:11.326578    4033 ssh_runner.go:195] Run: which lz4
	I0814 09:47:11.327920    4033 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 09:47:11.329237    4033 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 09:47:11.329249    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0814 09:47:12.357140    4033 docker.go:649] duration metric: took 1.029334042s to copy over tarball
	I0814 09:47:12.357202    4033 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 09:47:13.525716    4033 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.168603083s)
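The extraction step is doing three things worth noting: -I lz4 filters the archive through lz4, -C /var unpacks it under /var (which is where the preloaded Docker image store lands), and --xattrs --xattrs-include security.capability preserves capability xattrs on the extracted binaries. Annotated:

    # -I lz4       decompress through lz4
    # -C /var      unpack under /var (populates the Docker image store)
    # --xattrs --xattrs-include security.capability
    #              keep capability xattrs on extracted files
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4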
	I0814 09:47:13.525728    4033 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 09:47:13.543116    4033 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0814 09:47:13.546400    4033 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0814 09:47:13.551743    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:13.633440    4033 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0814 09:47:14.052311    4033 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0814 09:47:14.073166    4033 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0814 09:47:14.073177    4033 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0814 09:47:14.073182    4033 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
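The preload tarball ships images under their old k8s.gcr.io names, while this minikube expects registry.k8s.io names, so the check at docker.go:691 concludes the apiserver image "wasn't preloaded" and falls back to LoadCachedImages for the full registry.k8s.io set. The mismatch is visible directly in the image list:

    docker images --format '{{.Repository}}:{{.Tag}}' | grep kube-apiserver
    # prints k8s.gcr.io/kube-apiserver:v1.24.1, not registry.k8s.io/kube-apiserver:v1.24.1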
	I0814 09:47:14.077397    4033 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:14.079097    4033 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:14.081319    4033 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:14.081468    4033 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:14.084646    4033 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:14.084733    4033 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:14.086780    4033 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:14.086931    4033 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:14.088561    4033 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:14.088627    4033 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:14.090454    4033 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:14.090514    4033 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:14.092130    4033 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:14.092202    4033 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0814 09:47:14.093433    4033 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:14.094564    4033 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0814 09:47:14.518111    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:14.529259    4033 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0814 09:47:14.529299    4033 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:14.529354    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:14.546499    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0814 09:47:14.548098    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:14.551590    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:14.564307    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:14.574437    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:14.577997    4033 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0814 09:47:14.578031    4033 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:14.578143    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0814 09:47:14.582969    4033 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0814 09:47:14.583088    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:14.583177    4033 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0814 09:47:14.583193    4033 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:14.583215    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:14.587989    4033 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0814 09:47:14.588013    4033 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:14.588073    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:14.608815    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0814 09:47:14.621612    4033 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0814 09:47:14.621636    4033 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:14.621697    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:14.627347    4033 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0814 09:47:14.627364    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0814 09:47:14.627369    4033 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:14.627480    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:14.629429    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0814 09:47:14.631428    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0814 09:47:14.639142    4033 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0814 09:47:14.639165    4033 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0814 09:47:14.639214    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0814 09:47:14.648419    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0814 09:47:14.648553    4033 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0814 09:47:14.658358    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0814 09:47:14.658453    4033 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0814 09:47:14.666841    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0814 09:47:14.666895    4033 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0814 09:47:14.666905    4033 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0814 09:47:14.666921    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0814 09:47:14.666950    4033 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0814 09:47:14.666962    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0814 09:47:14.670135    4033 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0814 09:47:14.670200    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0814 09:47:14.712002    4033 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0814 09:47:14.712018    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0814 09:47:14.790234    4033 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
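Each cached image is streamed into the daemon with the same pattern; sudo sits on the cat rather than the docker client because the tarballs under /var/lib/minikube/images are root-owned:

    # Pattern used for every cached image (pause:3.7 shown):
    sudo cat /var/lib/minikube/images/pause_3.7 | docker load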
	I0814 09:47:14.798874    4033 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0814 09:47:14.798899    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0814 09:47:14.804987    4033 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0814 09:47:14.805164    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:14.926901    4033 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0814 09:47:14.926923    4033 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:14.926984    4033 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:14.927643    4033 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0814 09:47:15.075764    4033 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0814 09:47:15.075783    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0814 09:47:15.243711    4033 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0814 09:47:15.243751    4033 cache_images.go:92] duration metric: took 1.170656125s to LoadCachedImages
	W0814 09:47:15.243796    4033 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
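The load aborts here because the kube-controller-manager tarball was never materialized in the host-side cache: the stat on the cache path fails, minikube downgrades the failure to a warning, and startup continues with whatever images are already in the runtime. The failing host-side path, verbatim from the warning:

    stat /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1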
	I0814 09:47:15.243801    4033 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0814 09:47:15.243860    4033 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-579000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 09:47:15.243922    4033 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0814 09:47:15.259540    4033 cni.go:84] Creating CNI manager for ""
	I0814 09:47:15.259554    4033 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:47:15.259559    4033 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 09:47:15.259567    4033 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-579000 NodeName:running-upgrade-579000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 09:47:15.259648    4033 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-579000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
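The rendered kubeadm.yaml above is four YAML documents in one file: InitConfiguration (local API endpoint and the unix:///var/run/cri-dockerd.sock CRI socket), ClusterConfiguration (control-plane endpoint, cert dir, pod and service subnets), KubeletConfiguration (cgroupfs driver to match the Docker daemon, eviction thresholds effectively disabled), and KubeProxyConfiguration (cluster CIDR, conntrack limits and timeouts zeroed so kube-proxy leaves the host sysctls alone). A quick structural check once the file is staged on the guest:

    grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    # kind: InitConfiguration
    # kind: ClusterConfiguration
    # kind: KubeletConfiguration
    # kind: KubeProxyConfiguration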
	I0814 09:47:15.259708    4033 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0814 09:47:15.263860    4033 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:47:15.263892    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:47:15.267335    4033 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0814 09:47:15.273031    4033 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:47:15.278174    4033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0814 09:47:15.283941    4033 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0814 09:47:15.285620    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:15.374801    4033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 09:47:15.380993    4033 certs.go:68] Setting up /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000 for IP: 10.0.2.15
	I0814 09:47:15.381003    4033 certs.go:194] generating shared ca certs ...
	I0814 09:47:15.381011    4033 certs.go:226] acquiring lock for ca certs: {Name:mk41737d7568a132ec38012a87fa9d3345f331c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:15.381150    4033 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.key
	I0814 09:47:15.381186    4033 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.key
	I0814 09:47:15.381191    4033 certs.go:256] generating profile certs ...
	I0814 09:47:15.381266    4033 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/client.key
	I0814 09:47:15.381285    4033 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key.bba820ac
	I0814 09:47:15.381297    4033 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt.bba820ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0814 09:47:15.460837    4033 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt.bba820ac ...
	I0814 09:47:15.460848    4033 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt.bba820ac: {Name:mk32cac80ede8d2dd9c479d6a88b6194cbfdf702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:15.461434    4033 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key.bba820ac ...
	I0814 09:47:15.461440    4033 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key.bba820ac: {Name:mk27ad572bf7f7f21f28ce0746eb0bf92af71656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:15.462652    4033 certs.go:381] copying /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt.bba820ac -> /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt
	I0814 09:47:15.462795    4033 certs.go:385] copying /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key.bba820ac -> /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key
	I0814 09:47:15.462957    4033 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/proxy-client.key
	I0814 09:47:15.463084    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600.pem (1338 bytes)
	W0814 09:47:15.463108    4033 certs.go:480] ignoring /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600_empty.pem, impossibly tiny 0 bytes
	I0814 09:47:15.463113    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem (1675 bytes)
	I0814 09:47:15.463134    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem (1082 bytes)
	I0814 09:47:15.463154    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:47:15.463179    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem (1675 bytes)
	I0814 09:47:15.463216    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem (1708 bytes)
	I0814 09:47:15.463551    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:47:15.472069    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 09:47:15.479465    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:47:15.486890    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0814 09:47:15.494549    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 09:47:15.501859    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:47:15.508933    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:47:15.516130    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 09:47:15.523625    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:47:15.531124    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600.pem --> /usr/share/ca-certificates/1600.pem (1338 bytes)
	I0814 09:47:15.538641    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem --> /usr/share/ca-certificates/16002.pem (1708 bytes)
	I0814 09:47:15.546100    4033 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:47:15.551434    4033 ssh_runner.go:195] Run: openssl version
	I0814 09:47:15.553496    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16002.pem && ln -fs /usr/share/ca-certificates/16002.pem /etc/ssl/certs/16002.pem"
	I0814 09:47:15.556634    4033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16002.pem
	I0814 09:47:15.558255    4033 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:16 /usr/share/ca-certificates/16002.pem
	I0814 09:47:15.558277    4033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16002.pem
	I0814 09:47:15.560415    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16002.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:47:15.563950    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:47:15.567699    4033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:15.569715    4033 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:15.569740    4033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:15.571685    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:47:15.574985    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1600.pem && ln -fs /usr/share/ca-certificates/1600.pem /etc/ssl/certs/1600.pem"
	I0814 09:47:15.578466    4033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1600.pem
	I0814 09:47:15.580017    4033 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:16 /usr/share/ca-certificates/1600.pem
	I0814 09:47:15.580045    4033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1600.pem
	I0814 09:47:15.581983    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1600.pem /etc/ssl/certs/51391683.0"
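The test -L / ln -fs pairs implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is found via a symlink named <subject-hash>.0, where the hash comes from openssl x509 -hash (b5213941 is the hash computed for minikubeCA above, 3ec20f2e and 51391683 for the two user certs). The two steps combined into one sketch:

    # Install a CA under OpenSSL's hashed-symlink convention.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"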
	I0814 09:47:15.584918    4033 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 09:47:15.586658    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 09:47:15.589018    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 09:47:15.591187    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 09:47:15.593313    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 09:47:15.595632    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 09:47:15.597735    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
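openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 86400 seconds, so this run of checks asks one question per control-plane cert: will it still be valid 24 hours from now? A failing cert can then be regenerated before the control plane restarts. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo "valid for at least 24h" || echo "expires within 24h"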
	I0814 09:47:15.599677    4033 kubeadm.go:392] StartCluster: {Name:running-upgrade-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50343 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0814 09:47:15.599748    4033 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0814 09:47:15.611506    4033 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:47:15.616405    4033 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 09:47:15.616413    4033 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 09:47:15.616445    4033 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 09:47:15.620164    4033 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:15.620451    4033 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-579000" does not appear in /Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:47:15.620550    4033 kubeconfig.go:62] /Users/jenkins/minikube-integration/19446-1067/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-579000" cluster setting kubeconfig missing "running-upgrade-579000" context setting]
	I0814 09:47:15.620753    4033 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/kubeconfig: {Name:mkd5271b15535f495ab8e34d870e7dbcadc9c40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:15.621240    4033 kapi.go:59] client config for running-upgrade-579000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/client.key", CAFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10605fe30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 09:47:15.621573    4033 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 09:47:15.624541    4033 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-579000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
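Two of the drifts are what force the reconfigure: the CRI socket must now carry the unix:// scheme (the old config used the bare path form), and the kubelet cgroup driver flips from systemd to cgroupfs to match the Docker daemon configured at 09:47:07 (docker.go:574); hairpinMode and runtimeRequestTimeout are straightforward additions. The drift detection itself is just the diff run at 09:47:15.621 — a non-zero exit from

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new

is what flips kubeadm.go:640 into "reconfigure cluster" mode.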
	I0814 09:47:15.624547    4033 kubeadm.go:1160] stopping kube-system containers ...
	I0814 09:47:15.624600    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0814 09:47:15.637073    4033 docker.go:483] Stopping containers: [5f30d30839a6 4f54c1f789c0 7a70521ea1ee e7b35ee6cc2a 94d90481822b e95e6926ff67 f85fd17e0cb2 053e1a0d063c a455c7b28a0f f1fd8f95e57d 8cc17453c508 1d1ddf9610ca 03512631e6e0 9f9159bc24e9 a62bb551afa1 6ea1dddbda9a]
	I0814 09:47:15.637135    4033 ssh_runner.go:195] Run: docker stop 5f30d30839a6 4f54c1f789c0 7a70521ea1ee e7b35ee6cc2a 94d90481822b e95e6926ff67 f85fd17e0cb2 053e1a0d063c a455c7b28a0f f1fd8f95e57d 8cc17453c508 1d1ddf9610ca 03512631e6e0 9f9159bc24e9 a62bb551afa1 6ea1dddbda9a
	I0814 09:47:15.648553    4033 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 09:47:15.730598    4033 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:47:15.734458    4033 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug 14 16:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug 14 16:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 14 16:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 14 16:46 /etc/kubernetes/scheduler.conf
	
	I0814 09:47:15.734486    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/admin.conf
	I0814 09:47:15.737298    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:15.737322    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 09:47:15.740152    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/kubelet.conf
	I0814 09:47:15.743195    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:15.743221    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 09:47:15.746548    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/controller-manager.conf
	I0814 09:47:15.749754    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:15.749792    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:47:15.752702    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/scheduler.conf
	I0814 09:47:15.755458    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:15.755480    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 09:47:15.758754    4033 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:47:15.762165    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:15.794353    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:16.206104    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:16.407624    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:16.428634    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
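Instead of a full kubeadm init, the restart path replays individual init phases against the repaired config, regenerating certs, kubeconfigs, and static-pod manifests while keeping the existing etcd data under /var/lib/minikube/etcd. The five phases, simplified (the log runs each via sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH"):

    kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml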
	I0814 09:47:16.453004    4033 api_server.go:52] waiting for apiserver process to appear ...
	I0814 09:47:16.453084    4033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:16.955214    4033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:17.455044    4033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:17.955082    4033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:17.960412    4033 api_server.go:72] duration metric: took 1.50751875s to wait for apiserver process to appear ...
	I0814 09:47:17.960424    4033 api_server.go:88] waiting for apiserver healthz status ...
	I0814 09:47:17.960434    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:22.962164    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:22.962196    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:27.962080    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:27.962125    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:32.962103    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:32.962129    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:37.962283    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:37.962359    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:42.963238    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:42.963288    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:47.964023    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:47.964089    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:52.965213    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:52.965282    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:57.966913    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:57.966992    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:02.968999    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:02.969046    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:07.969904    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:07.969992    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:12.970998    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:12.971045    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:17.973007    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
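Each probe above gives https://10.0.2.15:8443/healthz about five seconds before timing out; after a full minute of consecutive failures (09:47:17 through 09:48:17) the apiserver is presumed wedged and minikube switches to evidence gathering — the docker ps / docker logs sequence that follows. A manual equivalent of the probe (-k because the apiserver cert is signed by minikubeCA):

    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # a healthy apiserver answers: ok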
	I0814 09:48:17.973422    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:18.013557    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:18.013704    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:18.034987    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:18.035087    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:18.050832    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:18.050911    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:18.063932    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:18.063998    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:18.075357    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:18.075444    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:18.086608    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:18.086675    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:18.097490    4033 logs.go:276] 0 containers: []
	W0814 09:48:18.097500    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:18.097562    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:18.109155    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:18.109180    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:18.109187    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:18.121308    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:18.121320    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:18.141028    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:18.141038    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:18.152791    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:18.152802    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:18.164533    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:18.164544    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:18.178346    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:18.178355    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:18.192510    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:18.192519    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:48:18.213692    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:18.213702    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:18.281785    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:18.281801    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:18.297021    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:18.297035    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:18.334883    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:18.334891    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:18.346723    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:18.346736    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:18.359382    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:18.359398    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:48:18.371864    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:18.371874    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:18.398924    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:18.398933    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:18.403463    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:18.403471    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:18.429089    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:18.429101    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
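
When the probe gives up, the gathering cycle at 09:48:18 runs: eight docker ps -a --filter=name=k8s_<component> --format={{.ID}} queries to find container IDs per component (two IDs each for apiserver, etcd, scheduler, and controller-manager, i.e. an exited and a current instance), then docker logs --tail 400 on every ID. The sketch below reproduces that discover-then-tail fan-out as a simplified local Go program; the real commands run inside the guest over the SSH transport (ssh_runner.go), which is omitted here.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers, running or exited, whose name
// matches the k8s_<component> prefix, mirroring the discovery step.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last 400 log lines of one container, mirroring
// the per-container gathering step.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	// The components probed in the log, in the same order.
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("listing %s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			if logs, err := tailLogs(id); err == nil {
				fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
			}
		}
	}
}
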
	I0814 09:48:20.953641    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:25.956156    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:25.956567    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:25.992725    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:25.992859    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:26.014026    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:26.014145    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:26.028847    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:26.028925    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:26.041046    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:26.041114    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:26.051768    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:26.051844    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:26.062617    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:26.062684    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:26.073279    4033 logs.go:276] 0 containers: []
	W0814 09:48:26.073288    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:26.073341    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:26.084032    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:26.084050    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:26.084056    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:26.095964    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:26.095978    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:26.123085    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:26.123095    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:26.149635    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:26.149648    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:26.163956    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:26.163973    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:26.181914    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:26.181924    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:26.196509    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:26.196523    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:26.209711    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:26.209724    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:26.221463    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:26.221477    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:26.232983    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:26.232996    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:26.269193    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:26.269199    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:26.306796    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:26.306810    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:48:26.321542    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:26.321552    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:48:26.332557    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:26.332569    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:26.344360    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:26.344371    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:26.348650    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:26.348657    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:26.362801    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:26.362812    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
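
One recurring step, "container status", shells out to sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: if crictl resolves it is used, and if it is absent (the echo substitutes a literal name that then fails to execute) or exits non-zero, the command falls back to plain docker ps -a. A hedged local equivalent of that fallback in Go, without the sudo and SSH layers:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when it is on PATH and falls back to
// docker, a simplification of the shell-level fallback in the log.
func containerStatus() (string, error) {
	tool := "docker"
	if _, err := exec.LookPath("crictl"); err == nil {
		tool = "crictl"
	}
	out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
	if err != nil && tool == "crictl" {
		// crictl present but failing: fall back to docker, as the
		// original command's trailing "|| sudo docker ps -a" does.
		out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
	}
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(out)
}
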
	I0814 09:48:28.876059    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:33.878338    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:33.878705    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:33.909755    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:33.909900    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:33.931030    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:33.931136    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:33.945046    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:33.945131    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:33.956547    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:33.956617    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:33.968187    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:33.968252    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:33.979336    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:33.979402    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:33.990168    4033 logs.go:276] 0 containers: []
	W0814 09:48:33.990180    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:33.990230    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:34.000537    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:34.000553    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:34.000558    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:34.043354    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:34.043367    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:34.060110    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:34.060120    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:34.072286    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:34.072296    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:34.089903    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:34.089912    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:34.102786    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:34.102796    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:34.117783    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:34.117795    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:34.144057    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:34.144067    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:34.158500    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:34.158510    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:48:34.170589    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:34.170602    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:34.182614    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:34.182626    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:34.209924    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:34.209939    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:34.215102    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:34.215111    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:34.227321    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:34.227332    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:34.238987    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:34.238997    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:34.276287    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:34.276297    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:48:34.290004    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:34.290017    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
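
The host-level steps pull systemd journals rather than container logs: journalctl -u kubelet -n 400 for the kubelet, journalctl -u docker -u cri-docker -n 400 for the runtime, plus a dmesg filtered to warn/err/crit/alert/emerg and piped through tail -n 400 for the kernel ring buffer. A minimal Go wrapper over the journal commands, assuming a systemd guest like the minikube VM here:

package main

import (
	"fmt"
	"os/exec"
)

// unitLogs tails the systemd journal for one or more units, the way
// the "kubelet" and "Docker" gathering steps do above.
func unitLogs(lines string, units ...string) (string, error) {
	args := []string{"-n", lines}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	out, err := exec.Command("journalctl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	for _, units := range [][]string{
		{"kubelet"},              // kubelet service log
		{"docker", "cri-docker"}, // container runtime logs, interleaved
	} {
		out, err := unitLogs("400", units...)
		if err != nil {
			fmt.Printf("journalctl %v: %v\n", units, err)
			continue
		}
		fmt.Print(out)
	}
}
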
	I0814 09:48:36.802851    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:41.804950    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:41.805077    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:41.824437    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:41.824541    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:41.839045    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:41.839113    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:41.851302    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:41.851376    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:41.862245    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:41.862310    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:41.872524    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:41.872593    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:41.883101    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:41.883164    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:41.896692    4033 logs.go:276] 0 containers: []
	W0814 09:48:41.896704    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:41.896766    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:41.907365    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:41.907382    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:41.907387    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:41.921252    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:41.921263    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:41.935181    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:41.935190    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:41.946578    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:41.946589    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:41.982583    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:41.982596    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:48:41.997772    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:41.997784    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:48:42.015951    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:42.015963    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:42.028945    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:42.028957    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:42.053434    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:42.053446    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:42.065138    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:42.065148    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:42.089782    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:42.089790    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:48:42.101087    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:42.101100    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:42.113155    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:42.113166    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:42.152634    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:42.152647    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:42.157469    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:42.157477    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:42.172522    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:42.172532    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:42.189029    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:42.189040    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:44.708265    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:49.710425    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:49.710668    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:49.731690    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:49.731770    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:49.744824    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:49.744889    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:49.755953    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:49.756025    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:49.766256    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:49.766330    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:49.776785    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:49.776860    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:49.787490    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:49.787561    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:49.797657    4033 logs.go:276] 0 containers: []
	W0814 09:48:49.797667    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:49.797727    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:49.808071    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:49.808090    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:49.808095    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:48:49.820590    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:49.820602    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:49.835792    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:49.835805    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:48:49.850267    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:49.850277    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:49.861548    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:49.861559    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:49.898560    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:49.898572    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:49.915675    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:49.915685    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:49.940232    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:49.940244    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:49.954373    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:49.954387    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:49.965813    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:49.965825    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:49.978365    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:49.978376    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:50.015727    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:50.015737    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:50.020022    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:50.020029    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:50.031129    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:50.031141    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:50.056419    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:50.056427    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:50.068144    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:50.068158    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:50.081905    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:50.081919    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:48:52.595120    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:57.597372    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:57.597837    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:57.637671    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:57.637808    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:57.660037    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:57.660153    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:57.674971    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:57.675044    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:57.687913    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:57.687987    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:57.698978    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:57.699045    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:57.709821    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:57.709898    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:57.720027    4033 logs.go:276] 0 containers: []
	W0814 09:48:57.720038    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:57.720096    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:57.731087    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:57.731104    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:57.731110    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:57.742605    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:57.742616    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:57.780654    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:57.780663    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:57.785298    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:57.785309    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:57.820845    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:57.820854    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:57.834748    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:57.834758    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:57.852402    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:57.852412    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:57.876725    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:57.876734    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:57.890892    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:57.890904    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:57.902956    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:57.902972    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:57.917236    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:57.917247    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:48:57.928331    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:57.928342    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:57.944448    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:57.944463    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:57.970120    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:57.970129    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:57.981975    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:57.981989    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:48:58.000240    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:58.000251    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:58.012366    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:58.012379    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:00.525445    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:05.527745    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:05.528126    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:05.561471    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:05.561598    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:05.580956    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:05.581079    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:05.594819    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:05.594896    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:05.608123    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:05.608200    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:05.618923    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:05.618996    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:05.630082    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:05.630151    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:05.640642    4033 logs.go:276] 0 containers: []
	W0814 09:49:05.640661    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:05.640720    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:05.651714    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:05.651734    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:05.651740    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:05.687642    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:05.687653    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:05.703630    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:05.703640    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:05.715804    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:05.715816    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:05.728032    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:05.728043    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:05.739572    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:05.739584    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:05.751056    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:05.751067    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:05.755653    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:05.755659    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:05.769278    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:05.769289    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:05.785194    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:05.785209    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:05.797409    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:05.797419    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:05.811468    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:05.811480    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:05.835019    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:05.835029    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:05.847138    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:05.847150    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:05.864816    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:05.864827    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:05.890452    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:05.890461    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:05.926893    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:05.926901    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:08.442797    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:13.445370    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:13.445634    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:13.475669    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:13.475784    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:13.495314    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:13.495419    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:13.509571    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:13.509649    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:13.521528    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:13.521597    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:13.532108    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:13.532174    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:13.543304    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:13.543370    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:13.553873    4033 logs.go:276] 0 containers: []
	W0814 09:49:13.553884    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:13.553942    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:13.564554    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:13.564573    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:13.564578    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:13.578276    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:13.578286    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:13.589789    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:13.589799    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:13.626976    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:13.626984    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:13.652621    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:13.652630    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:13.670179    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:13.670189    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:13.682202    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:13.682214    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:13.694565    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:13.694577    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:13.709693    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:13.709707    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:13.723735    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:13.723745    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:13.743985    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:13.743994    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:13.770613    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:13.770620    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:13.781986    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:13.782003    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:13.786730    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:13.786740    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:13.822733    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:13.822746    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:13.837639    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:13.837650    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:13.848808    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:13.848820    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:16.362859    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:21.365097    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:21.365246    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:21.380049    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:21.380143    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:21.392647    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:21.392723    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:21.403412    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:21.403480    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:21.413611    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:21.413670    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:21.432310    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:21.432380    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:21.443004    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:21.443075    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:21.454015    4033 logs.go:276] 0 containers: []
	W0814 09:49:21.454030    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:21.454089    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:21.464583    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:21.464604    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:21.464609    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:21.490928    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:21.490937    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:21.502631    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:21.502644    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:21.540557    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:21.540567    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:21.575537    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:21.575549    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:21.590769    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:21.590781    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:21.604412    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:21.604425    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:21.615859    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:21.615874    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:21.634431    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:21.634440    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:21.646850    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:21.646864    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:21.658413    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:21.658425    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:21.669512    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:21.669523    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:21.687776    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:21.687786    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:21.706472    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:21.706483    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:21.710929    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:21.710935    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:21.738055    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:21.738069    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:21.751016    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:21.751029    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:24.264865    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:29.266830    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:29.266975    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:29.279192    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:29.279271    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:29.291089    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:29.291158    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:29.301520    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:29.301586    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:29.311805    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:29.311874    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:29.322231    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:29.322285    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:29.345997    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:29.346070    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:29.358213    4033 logs.go:276] 0 containers: []
	W0814 09:49:29.358225    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:29.358285    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:29.368653    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:29.368670    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:29.368675    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:29.373748    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:29.373757    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:29.385682    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:29.385694    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:29.397199    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:29.397214    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:29.411566    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:29.411577    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:29.426316    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:29.426326    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:29.437870    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:29.437883    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:29.452098    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:29.452110    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:29.463030    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:29.463042    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:29.481837    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:29.481849    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:29.507242    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:29.507251    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:29.544065    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:29.544074    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:29.579771    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:29.579785    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:29.594567    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:29.594579    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:29.618304    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:29.618318    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:29.642420    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:29.642431    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:29.654252    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:29.654262    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:32.168776    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:37.170865    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:37.170987    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:37.182157    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:37.182229    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:37.193315    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:37.193384    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:37.204204    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:37.204270    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:37.215038    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:37.215093    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:37.226780    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:37.226845    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:37.244460    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:37.244521    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:37.254315    4033 logs.go:276] 0 containers: []
	W0814 09:49:37.254326    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:37.254378    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:37.264806    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:37.264824    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:37.264831    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:37.288847    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:37.288857    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:37.302897    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:37.302905    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:37.317720    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:37.317733    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:37.329638    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:37.329649    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:37.343333    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:37.343345    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:37.360548    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:37.360561    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:37.374979    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:37.374989    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:37.413358    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:37.413368    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:37.424334    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:37.424346    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:37.436376    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:37.436387    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:37.462231    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:37.462242    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:37.500468    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:37.500481    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:37.512176    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:37.512187    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:37.523556    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:37.523568    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:37.534990    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:37.535006    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:37.546665    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:37.546678    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:40.052879    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:45.055583    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:45.056046    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:45.094259    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:45.094376    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:45.115954    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:45.116062    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:45.132504    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:45.132587    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:45.146864    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:45.146938    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:45.157760    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:45.157830    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:45.169360    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:45.169437    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:45.180720    4033 logs.go:276] 0 containers: []
	W0814 09:49:45.180733    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:45.180796    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:45.192229    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:45.192247    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:45.192253    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:45.197393    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:45.197401    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:45.221473    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:45.221483    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:45.236662    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:45.236672    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:45.273967    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:45.273975    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:45.288336    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:45.288346    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:45.308657    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:45.308668    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:45.320282    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:45.320294    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:45.332762    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:45.332773    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:45.347676    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:45.347686    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:45.363684    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:45.363695    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:45.381037    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:45.381048    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:45.406931    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:45.406938    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:45.419130    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:45.419140    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:45.457585    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:45.457596    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:45.477092    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:45.477101    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:45.489142    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:45.489152    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:48.002872    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:53.005086    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:53.005319    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:53.034849    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:53.034925    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:53.049574    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:53.049648    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:53.061523    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:53.061599    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:53.072595    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:53.072673    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:53.082853    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:53.082920    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:53.093072    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:53.093144    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:53.103494    4033 logs.go:276] 0 containers: []
	W0814 09:49:53.103507    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:53.103568    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:53.114040    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:53.114059    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:53.114065    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:53.128069    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:53.128080    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:53.145256    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:53.145267    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:53.157260    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:53.157272    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:53.193936    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:53.193951    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:53.218323    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:53.218334    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:53.232631    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:53.232642    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:53.269586    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:53.269594    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:53.280451    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:53.280459    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:53.291712    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:53.291727    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:53.307314    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:53.307324    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:53.332320    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:53.332327    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:53.347045    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:53.347056    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:53.358824    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:53.358834    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:53.370335    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:53.370345    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:53.374776    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:53.374783    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:53.396747    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:53.396758    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:55.910213    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:00.912875    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:00.913318    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:00.958878    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:00.958985    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:00.983721    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:00.983802    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:00.997792    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:00.997871    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:01.019422    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:01.019491    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:01.032305    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:01.032379    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:01.047374    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:01.047441    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:01.058055    4033 logs.go:276] 0 containers: []
	W0814 09:50:01.058067    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:01.058126    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:01.069853    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:01.069871    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:01.069876    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:01.087490    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:01.087500    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:01.102998    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:01.103011    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:01.128321    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:01.128336    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:01.162727    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:01.162738    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:01.177966    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:01.177979    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:01.190662    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:01.190674    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:01.213021    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:01.213037    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:01.226746    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:01.226758    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:01.266671    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:01.266682    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:01.281216    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:01.281231    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:01.293271    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:01.293285    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:01.317167    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:01.317177    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:01.328594    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:01.328605    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:01.333348    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:01.333357    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:01.347461    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:01.347472    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:01.358672    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:01.358682    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:03.872968    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:08.875179    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:08.875608    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:08.912568    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:08.912711    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:08.937125    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:08.937220    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:08.951709    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:08.951788    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:08.963917    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:08.963990    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:08.974140    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:08.974220    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:08.984628    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:08.984701    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:08.995164    4033 logs.go:276] 0 containers: []
	W0814 09:50:08.995175    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:08.995237    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:09.006156    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:09.006174    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:09.006179    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:09.010534    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:09.010543    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:09.024723    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:09.024733    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:09.062917    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:09.062934    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:09.074756    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:09.074770    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:09.087250    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:09.087266    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:09.102608    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:09.102618    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:09.116761    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:09.116777    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:09.133300    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:09.133314    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:09.144812    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:09.144824    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:09.156288    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:09.156300    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:09.173392    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:09.173401    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:09.186355    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:09.186365    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:09.221746    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:09.221758    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:09.246996    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:09.247009    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:09.259202    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:09.259217    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:09.270692    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:09.270704    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:11.797506    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:16.799722    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:16.799958    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:16.821650    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:16.821745    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:16.840217    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:16.840286    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:16.852466    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:16.852541    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:16.869307    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:16.869379    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:16.884044    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:16.884112    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:16.894885    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:16.894947    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:16.904876    4033 logs.go:276] 0 containers: []
	W0814 09:50:16.904888    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:16.904950    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:16.915435    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:16.915455    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:16.915461    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:16.927045    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:16.927056    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:16.939318    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:16.939328    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:16.953095    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:16.953106    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:16.963825    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:16.963837    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:16.975360    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:16.975370    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:16.986843    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:16.986853    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:17.000636    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:17.000646    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:17.025345    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:17.025352    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:17.062777    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:17.062785    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:17.067204    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:17.067211    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:17.102333    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:17.102343    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:17.125738    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:17.125749    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:17.141396    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:17.141407    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:17.160999    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:17.161011    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:17.173270    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:17.173281    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:17.188289    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:17.188298    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:19.701709    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:24.703835    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:24.703954    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:24.714859    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:24.714930    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:24.725623    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:24.725685    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:24.735961    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:24.736037    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:24.746742    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:24.746803    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:24.757151    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:24.757210    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:24.771443    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:24.771521    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:24.782020    4033 logs.go:276] 0 containers: []
	W0814 09:50:24.782031    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:24.782089    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:24.794035    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:24.794055    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:24.794060    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:24.805044    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:24.805056    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:24.809265    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:24.809272    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:24.823521    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:24.823530    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:24.837790    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:24.837802    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:24.851705    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:24.851715    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:24.863648    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:24.863663    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:24.875446    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:24.875456    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:24.899166    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:24.899178    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:24.922338    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:24.922346    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:24.958575    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:24.958583    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:24.994481    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:24.994490    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:25.005731    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:25.005743    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:25.027177    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:25.027190    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:25.038572    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:25.038587    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:25.050703    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:25.050713    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:25.064633    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:25.064643    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:27.578246    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:32.580409    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:32.580621    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:32.600557    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:32.600647    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:32.615198    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:32.615277    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:32.628383    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:32.628454    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:32.638984    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:32.639042    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:32.649553    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:32.649624    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:32.660598    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:32.660656    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:32.670696    4033 logs.go:276] 0 containers: []
	W0814 09:50:32.670709    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:32.670770    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:32.681893    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:32.681912    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:32.681918    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:32.707055    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:32.707067    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:32.718163    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:32.718176    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:32.736547    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:32.736558    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:32.750646    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:32.750656    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:32.786051    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:32.786066    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:32.804895    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:32.804906    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:32.819035    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:32.819046    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:32.830620    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:32.830630    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:32.846245    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:32.846256    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:32.850877    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:32.850884    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:32.864955    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:32.864965    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:32.876440    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:32.876451    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:32.888927    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:32.888939    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:32.913197    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:32.913205    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:32.951072    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:32.951086    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:32.963445    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:32.963456    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:35.477370    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:40.479410    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:40.479525    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:40.492259    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:40.492339    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:40.503538    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:40.503615    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:40.514379    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:40.514446    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:40.525182    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:40.525252    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:40.535379    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:40.535447    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:40.546044    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:40.546126    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:40.556221    4033 logs.go:276] 0 containers: []
	W0814 09:50:40.556230    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:40.556284    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:40.570721    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:40.570740    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:40.570745    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:40.584997    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:40.585007    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:40.596587    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:40.596598    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:40.609573    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:40.609583    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:40.625957    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:40.625968    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:40.649935    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:40.649942    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:40.665553    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:40.665562    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:40.676575    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:40.676594    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:40.713568    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:40.713579    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:40.718193    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:40.718199    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:40.752116    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:40.752131    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:40.776464    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:40.776474    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:40.791062    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:40.791073    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:40.808703    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:40.808712    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:40.822153    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:40.822162    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:40.836507    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:40.836522    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:40.848545    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:40.848559    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:43.365019    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:48.367493    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:48.367714    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:48.387383    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:48.387478    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:48.402680    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:48.402760    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:48.417081    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:48.417157    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:48.427407    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:48.427473    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:48.438354    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:48.438427    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:48.449785    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:48.449858    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:48.460355    4033 logs.go:276] 0 containers: []
	W0814 09:50:48.460374    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:48.460437    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:48.470657    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:48.470679    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:48.470685    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:48.475305    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:48.475311    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:48.497567    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:48.497577    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:48.509731    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:48.509747    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:48.524109    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:48.524123    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:48.543241    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:48.543256    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:48.557020    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:48.557031    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:48.593713    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:48.593721    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:48.608207    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:48.608217    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:48.619939    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:48.619948    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:48.631521    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:48.631533    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:48.656135    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:48.656143    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:48.690899    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:48.690912    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:48.714310    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:48.714324    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:48.728881    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:48.728891    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:48.744987    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:48.745002    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:48.756411    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:48.756425    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:51.274136    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:56.276354    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:56.276549    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:56.289242    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:56.289320    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:56.299801    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:56.299876    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:56.310208    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:56.310303    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:56.320657    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:56.320726    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:56.334138    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:56.334217    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:56.346425    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:56.346489    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:56.356655    4033 logs.go:276] 0 containers: []
	W0814 09:50:56.356665    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:56.356726    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:56.367482    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:56.367500    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:56.367506    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:56.382347    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:56.382360    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:56.399600    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:56.399611    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:56.421097    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:56.421108    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:56.432438    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:56.432452    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:56.471623    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:56.471638    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:56.498817    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:56.498831    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:56.512862    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:56.512874    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:56.524342    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:56.524352    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:56.537853    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:56.537865    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:56.551580    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:56.551594    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:56.563112    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:56.563123    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:56.574818    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:56.574832    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:56.579277    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:56.579284    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:56.612818    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:56.612833    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:56.624452    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:56.624463    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:56.636416    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:56.636427    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:59.161002    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:04.163370    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
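
	[editor's note] The cycle above repeats for the remainder of this log: minikube probes the apiserver's /healthz endpoint with a 5-second client timeout, and on each timeout it enumerates the Kubernetes component containers via docker ps and tails their logs before retrying. A minimal Go sketch of the probe step, assuming only what the log shows (the 10.0.2.15:8443 address and the 5s timeout); the TLS handling is an illustrative assumption, since the test VM serves a self-signed certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz issues one GET against the healthz URL and returns an
	// error if the endpoint did not answer before the client timeout.
	func checkHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: timeout,
			// Skip certificate verification purely for illustration.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err // e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %s %s\n", resp.Status, body)
		return nil
	}

	func main() {
		for attempt := 0; attempt < 5; attempt++ {
			err := checkHealthz("https://10.0.2.15:8443/healthz", 5*time.Second)
			if err == nil {
				return
			}
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second) // minikube gathers component logs here, then retries
		}
	}
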
	I0814 09:51:04.163574    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:51:04.177972    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:51:04.178051    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:51:04.190087    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:51:04.190164    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:51:04.201609    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:51:04.201678    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:51:04.212280    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:51:04.212359    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:51:04.238804    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:51:04.238889    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:51:04.256209    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:51:04.256289    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:51:04.269666    4033 logs.go:276] 0 containers: []
	W0814 09:51:04.269677    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:51:04.269736    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:51:04.280376    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:51:04.280394    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:51:04.280400    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:51:04.291698    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:51:04.291709    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:51:04.315388    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:51:04.315398    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:51:04.319614    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:51:04.319624    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:51:04.334317    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:51:04.334329    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:51:04.348231    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:51:04.348238    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:51:04.360628    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:51:04.360644    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:51:04.372462    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:51:04.372476    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:51:04.410786    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:51:04.410799    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:51:04.445526    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:51:04.445540    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:51:04.469452    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:51:04.469464    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:51:04.483055    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:51:04.483068    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:51:04.495450    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:51:04.495462    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:51:04.512073    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:51:04.512085    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:51:04.530013    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:51:04.530024    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:51:04.545689    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:51:04.545701    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:51:04.557655    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:51:04.557666    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:51:07.071046    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:12.073603    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:12.074020    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:51:12.107079    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:51:12.107207    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:51:12.128174    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:51:12.128273    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:51:12.143576    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:51:12.143658    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:51:12.161749    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:51:12.161828    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:51:12.174781    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:51:12.174853    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:51:12.185684    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:51:12.185758    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:51:12.196639    4033 logs.go:276] 0 containers: []
	W0814 09:51:12.196652    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:51:12.196714    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:51:12.207105    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:51:12.207127    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:51:12.207133    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:51:12.221337    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:51:12.221347    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:51:12.235509    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:51:12.235519    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:51:12.249697    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:51:12.249708    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:51:12.266530    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:51:12.266540    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:51:12.278123    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:51:12.278134    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:51:12.316962    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:51:12.316970    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:51:12.352120    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:51:12.352130    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:51:12.376188    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:51:12.376200    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:51:12.394234    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:51:12.394245    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:51:12.406180    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:51:12.406191    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:51:12.422608    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:51:12.422618    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:51:12.433698    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:51:12.433709    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:51:12.438671    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:51:12.438676    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:51:12.450096    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:51:12.450108    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:51:12.474132    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:51:12.474140    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:51:12.487777    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:51:12.487788    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:51:15.001837    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:20.002990    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:20.003022    4033 kubeadm.go:597] duration metric: took 4m4.397822792s to restartPrimaryControlPlane
	W0814 09:51:20.003052    4033 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 09:51:20.003067    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0814 09:51:21.061708    4033 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.058676959s)
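
	[editor's note] With the control-plane restart abandoned after roughly four minutes of failed health checks, minikube wipes the node state with kubeadm reset before re-initializing. A sketch of the logged invocation, run through bash as in the log so that $PATH expands and kubeadm resolves to minikube's bundled binary for the target Kubernetes version:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the command logged above verbatim.
		cmd := exec.Command("/bin/bash", "-c",
			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("reset failed:", err)
		}
	}
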
	I0814 09:51:21.061793    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:51:21.068216    4033 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:51:21.071063    4033 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:51:21.074407    4033 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:51:21.074412    4033 kubeadm.go:157] found existing configuration files:
	
	I0814 09:51:21.074439    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/admin.conf
	I0814 09:51:21.077531    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 09:51:21.077554    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 09:51:21.080104    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/kubelet.conf
	I0814 09:51:21.082721    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 09:51:21.082748    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 09:51:21.085760    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/controller-manager.conf
	I0814 09:51:21.088404    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 09:51:21.088426    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:51:21.090840    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/scheduler.conf
	I0814 09:51:21.093684    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 09:51:21.093707    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
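
	[editor's note] The grep/rm sequence above is minikube's stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and anything else is removed so kubeadm init can regenerate it. After the reset all four files are absent, so every grep exits with status 2 and each rm is a no-op. A compact sketch of the same loop:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:50343"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint (or the file) is missing.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}
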
	I0814 09:51:21.096538    4033 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 09:51:21.114365    4033 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0814 09:51:21.114393    4033 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 09:51:21.164031    4033 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 09:51:21.164088    4033 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 09:51:21.164156    4033 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 09:51:21.213693    4033 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 09:51:21.221851    4033 out.go:204]   - Generating certificates and keys ...
	I0814 09:51:21.221887    4033 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 09:51:21.221916    4033 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 09:51:21.221960    4033 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 09:51:21.221995    4033 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 09:51:21.222036    4033 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 09:51:21.222083    4033 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 09:51:21.222121    4033 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 09:51:21.222154    4033 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 09:51:21.222187    4033 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 09:51:21.222224    4033 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 09:51:21.222242    4033 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 09:51:21.222277    4033 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 09:51:21.462741    4033 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 09:51:21.646629    4033 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 09:51:21.783046    4033 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 09:51:21.958814    4033 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 09:51:21.988432    4033 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 09:51:21.988816    4033 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 09:51:21.988838    4033 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 09:51:22.075930    4033 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 09:51:22.079138    4033 out.go:204]   - Booting up control plane ...
	I0814 09:51:22.079188    4033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 09:51:22.079229    4033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 09:51:22.079261    4033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 09:51:22.079305    4033 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 09:51:22.079427    4033 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 09:51:26.078878    4033 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001782 seconds
	I0814 09:51:26.078935    4033 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 09:51:26.083779    4033 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 09:51:26.593196    4033 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 09:51:26.593349    4033 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-579000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 09:51:27.096748    4033 kubeadm.go:310] [bootstrap-token] Using token: jgb9at.yikhjz5w53wfghkv
	I0814 09:51:27.105140    4033 out.go:204]   - Configuring RBAC rules ...
	I0814 09:51:27.105200    4033 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 09:51:27.105246    4033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 09:51:27.106043    4033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 09:51:27.107069    4033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 09:51:27.107968    4033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 09:51:27.108795    4033 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 09:51:27.112013    4033 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 09:51:27.280116    4033 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 09:51:27.499967    4033 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 09:51:27.500354    4033 kubeadm.go:310] 
	I0814 09:51:27.500381    4033 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 09:51:27.500384    4033 kubeadm.go:310] 
	I0814 09:51:27.500421    4033 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 09:51:27.500424    4033 kubeadm.go:310] 
	I0814 09:51:27.500457    4033 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 09:51:27.500494    4033 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 09:51:27.500524    4033 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 09:51:27.500556    4033 kubeadm.go:310] 
	I0814 09:51:27.500639    4033 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 09:51:27.500643    4033 kubeadm.go:310] 
	I0814 09:51:27.500666    4033 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 09:51:27.500675    4033 kubeadm.go:310] 
	I0814 09:51:27.500701    4033 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 09:51:27.500761    4033 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 09:51:27.500828    4033 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 09:51:27.500833    4033 kubeadm.go:310] 
	I0814 09:51:27.500878    4033 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 09:51:27.500954    4033 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 09:51:27.500960    4033 kubeadm.go:310] 
	I0814 09:51:27.501018    4033 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jgb9at.yikhjz5w53wfghkv \
	I0814 09:51:27.501121    4033 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6bc1bdbbe167ab66a20d6bf1c306e986530a9d0fee84c418f91e1b4312d4e260 \
	I0814 09:51:27.501133    4033 kubeadm.go:310] 	--control-plane 
	I0814 09:51:27.501136    4033 kubeadm.go:310] 
	I0814 09:51:27.501245    4033 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 09:51:27.501251    4033 kubeadm.go:310] 
	I0814 09:51:27.501290    4033 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jgb9at.yikhjz5w53wfghkv \
	I0814 09:51:27.501345    4033 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6bc1bdbbe167ab66a20d6bf1c306e986530a9d0fee84c418f91e1b4312d4e260 
	I0814 09:51:27.501483    4033 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
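
	[editor's note] The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the control plane's identity. A sketch that recomputes it; the CA path follows the certificateDir logged earlier ("/var/lib/minikube/certs"):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Hash the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
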
	I0814 09:51:27.501493    4033 cni.go:84] Creating CNI manager for ""
	I0814 09:51:27.501502    4033 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:51:27.505704    4033 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 09:51:27.511692    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 09:51:27.514643    4033 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
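
	[editor's note] The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI plugin chain. A representative conflist of that shape, written from Go; the field values here are illustrative assumptions, not a byte-for-byte copy of what minikube generates:

	package main

	import "os"

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		// Equivalent of the logged "scp memory --> /etc/cni/net.d/1-k8s.conflist".
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
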
	I0814 09:51:27.519956    4033 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:51:27.520051    4033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-579000 minikube.k8s.io/updated_at=2024_08_14T09_51_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=running-upgrade-579000 minikube.k8s.io/primary=true
	I0814 09:51:27.520090    4033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:27.566773    4033 ops.go:34] apiserver oom_adj: -16
	I0814 09:51:27.566860    4033 kubeadm.go:1113] duration metric: took 46.857375ms to wait for elevateKubeSystemPrivileges
	I0814 09:51:27.566873    4033 kubeadm.go:394] duration metric: took 4m11.9787485s to StartCluster
	I0814 09:51:27.566882    4033 settings.go:142] acquiring lock: {Name:mk45b0aba98bc9a80a7cc9e2d664f69dcf74de9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:51:27.566957    4033 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:51:27.567352    4033 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/kubeconfig: {Name:mkd5271b15535f495ab8e34d870e7dbcadc9c40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:51:27.567567    4033 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:51:27.567577    4033 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 09:51:27.567617    4033 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-579000"
	I0814 09:51:27.567632    4033 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-579000"
	W0814 09:51:27.567635    4033 addons.go:243] addon storage-provisioner should already be in state true
	I0814 09:51:27.567646    4033 host.go:66] Checking if "running-upgrade-579000" exists ...
	I0814 09:51:27.567645    4033 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-579000"
	I0814 09:51:27.567662    4033 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-579000"
	I0814 09:51:27.567646    4033 config.go:182] Loaded profile config "running-upgrade-579000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0814 09:51:27.568588    4033 kapi.go:59] client config for running-upgrade-579000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/client.key", CAFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10605fe30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 09:51:27.568752    4033 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-579000"
	W0814 09:51:27.568761    4033 addons.go:243] addon default-storageclass should already be in state true
	I0814 09:51:27.568769    4033 host.go:66] Checking if "running-upgrade-579000" exists ...
	I0814 09:51:27.571697    4033 out.go:177] * Verifying Kubernetes components...
	I0814 09:51:27.572055    4033 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:51:27.575742    4033 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:51:27.575748    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	I0814 09:51:27.579599    4033 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:51:27.583671    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:51:27.589655    4033 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:51:27.589662    4033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:51:27.589668    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	I0814 09:51:27.679170    4033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 09:51:27.685428    4033 api_server.go:52] waiting for apiserver process to appear ...
	I0814 09:51:27.685470    4033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:51:27.689311    4033 api_server.go:72] duration metric: took 121.73775ms to wait for apiserver process to appear ...
	I0814 09:51:27.689318    4033 api_server.go:88] waiting for apiserver healthz status ...
	I0814 09:51:27.689325    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:27.718669    4033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:51:27.742350    4033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:51:28.066946    4033 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0814 09:51:28.066958    4033 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0814 09:51:32.691168    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:32.691190    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:37.691185    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:37.691207    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:42.691285    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:42.691331    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:47.691535    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:47.691582    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:52.691920    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:52.691947    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:57.692447    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:57.692483    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0814 09:51:58.067002    4033 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0814 09:51:58.071267    4033 out.go:177] * Enabled addons: storage-provisioner
	I0814 09:51:58.079267    4033 addons.go:510] duration metric: took 30.513024458s for enable addons: enabled=[storage-provisioner]
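
	[editor's note] The default-storageclass failure above is expected given the unreachable apiserver: the addon must list StorageClasses over the API before it can mark one as default, and that call dials 10.0.2.15:8443 directly. A minimal client-go sketch of the listing step that produced the "dial tcp 10.0.2.15:8443: i/o timeout" error; the kubeconfig path is an assumption for illustration:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			// With the apiserver down, this surfaces as the i/o timeout in the log.
			fmt.Println("Error listing StorageClasses:", err)
			return
		}
		for _, sc := range scs.Items {
			fmt.Println(sc.Name)
		}
	}
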
	I0814 09:52:02.693186    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:02.693218    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:07.694122    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:07.694174    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:12.694283    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:12.694350    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:17.695655    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:17.695679    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:22.697323    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:22.697369    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:27.699388    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:27.699500    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:27.710830    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:52:27.710903    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:27.721359    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:52:27.721426    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:27.732558    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:52:27.732628    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:27.742880    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:52:27.742947    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:27.753549    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:52:27.753617    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:27.764020    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:52:27.764095    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:27.775056    4033 logs.go:276] 0 containers: []
	W0814 09:52:27.775070    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:27.775129    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:27.787424    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:52:27.787440    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:52:27.787446    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:52:27.805679    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:27.805690    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:27.830437    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:27.830453    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:27.834895    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:52:27.834902    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:52:27.850479    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:52:27.850490    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:52:27.870927    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:52:27.870943    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:52:27.885506    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:52:27.885515    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:52:27.897611    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:52:27.897621    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:52:27.909501    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:52:27.909517    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:52:27.920954    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:52:27.920964    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:27.932553    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:27.932562    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:52:27.968072    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:27.968080    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:28.039866    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:52:28.039877    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:52:30.556177    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:35.558365    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:35.558525    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:35.570793    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:52:35.570869    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:35.585371    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:52:35.585448    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:35.599254    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:52:35.599327    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:35.609343    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:52:35.609411    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:35.620431    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:52:35.620501    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:35.631002    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:52:35.631070    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:35.641008    4033 logs.go:276] 0 containers: []
	W0814 09:52:35.641019    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:35.641074    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:35.652420    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:52:35.652436    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:52:35.652442    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:52:35.664545    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:52:35.664556    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:52:35.677843    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:52:35.677854    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:52:35.692391    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:52:35.692402    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:52:35.710519    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:52:35.710529    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:52:35.722083    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:35.722096    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:35.745221    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:35.745230    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:35.749591    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:52:35.749601    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:52:35.763446    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:52:35.763459    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:35.776289    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:52:35.776301    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:52:35.790956    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:52:35.790967    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:52:35.803243    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:35.803253    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:52:35.838459    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:35.838470    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:38.377273    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:43.379237    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:43.379375    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:43.392911    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:52:43.392985    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:43.404058    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:52:43.404124    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:43.414650    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:52:43.414719    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:43.429210    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:52:43.429274    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:43.439045    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:52:43.439107    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:43.449382    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:52:43.449449    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:43.459759    4033 logs.go:276] 0 containers: []
	W0814 09:52:43.459770    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:43.459818    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:43.470105    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:52:43.470119    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:52:43.470124    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:52:43.483994    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:52:43.484009    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:52:43.495969    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:52:43.495981    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:52:43.508032    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:52:43.508041    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:52:43.522900    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:43.522910    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:43.547982    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:52:43.547991    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:43.559177    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:52:43.559188    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:52:43.571002    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:43.571013    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:52:43.604404    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:43.604412    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:43.609121    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:43.609127    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:43.643850    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:52:43.643862    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:52:43.658525    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:52:43.658536    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:52:43.674148    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:52:43.674161    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:52:46.193905    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:51.195602    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:51.195810    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:51.213418    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:52:51.213502    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:51.226537    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:52:51.226617    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:51.242357    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:52:51.242429    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:51.252599    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:52:51.252671    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:51.263102    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:52:51.263168    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:51.273252    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:52:51.273321    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:51.283273    4033 logs.go:276] 0 containers: []
	W0814 09:52:51.283286    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:51.283349    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:51.294321    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:52:51.294335    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:51.294340    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:51.298855    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:51.298863    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:51.336178    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:52:51.336188    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:52:51.348505    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:52:51.348517    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:52:51.362363    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:52:51.362378    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:52:51.375065    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:52:51.375075    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:52:51.393470    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:52:51.393481    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:51.405159    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:51.405171    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:52:51.439427    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:52:51.439437    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:52:51.454152    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:52:51.454164    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:52:51.468625    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:52:51.468634    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:52:51.483338    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:52:51.483348    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:52:51.494995    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:51.495006    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:54.020128    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:59.022226    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:59.022409    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:59.036970    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:52:59.037058    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:59.049138    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:52:59.049214    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:59.059730    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:52:59.059798    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:59.070502    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:52:59.070574    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:59.080758    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:52:59.080834    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:59.090958    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:52:59.091021    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:59.101443    4033 logs.go:276] 0 containers: []
	W0814 09:52:59.101454    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:59.101518    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:59.111589    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:52:59.111604    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:52:59.111609    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:52:59.123837    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:52:59.123847    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:52:59.135485    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:52:59.135496    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:52:59.150981    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:52:59.150991    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:52:59.168241    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:52:59.168252    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:52:59.180358    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:59.180369    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:59.203828    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:52:59.203837    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:59.215040    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:52:59.215052    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:52:59.229172    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:59.229182    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:59.233715    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:59.233721    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:59.270501    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:52:59.270511    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:52:59.289244    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:52:59.289256    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:52:59.306714    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:59.306726    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:01.844852    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:06.846889    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:06.847084    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:06.866762    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:06.866865    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:06.881689    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:06.881764    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:06.893883    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:53:06.893951    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:06.905061    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:06.905133    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:06.916077    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:06.916146    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:06.926818    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:06.926888    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:06.937311    4033 logs.go:276] 0 containers: []
	W0814 09:53:06.937321    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:06.937379    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:06.948394    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:06.948409    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:06.948414    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:06.960898    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:06.960909    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:06.978751    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:06.978762    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:07.004180    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:07.004188    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:07.015445    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:07.015459    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:07.049497    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:07.049510    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:07.063900    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:07.063911    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:07.077950    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:07.077962    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:07.089815    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:07.089824    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:07.105144    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:07.105157    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:07.122305    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:07.122318    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:07.133885    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:07.133895    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:07.168273    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:07.168281    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:09.674676    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:14.676758    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:14.676982    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:14.704992    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:14.705143    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:14.722370    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:14.722451    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:14.739083    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:53:14.739148    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:14.750072    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:14.750138    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:14.760728    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:14.760793    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:14.771499    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:14.771564    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:14.781711    4033 logs.go:276] 0 containers: []
	W0814 09:53:14.781722    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:14.781775    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:14.795548    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:14.795564    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:14.795570    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:14.810836    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:14.810850    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:14.815633    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:14.815639    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:14.831910    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:14.831921    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:14.846195    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:14.846205    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:14.859831    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:14.859845    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:14.871998    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:14.872009    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:14.884280    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:14.884290    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:14.901295    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:14.901305    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:14.925003    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:14.925012    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:14.959319    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:14.959327    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:14.999495    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:14.999506    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:15.014563    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:15.014572    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:17.528775    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:22.530847    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:22.531003    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:22.545429    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:22.545509    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:22.556436    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:22.556510    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:22.568791    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:53:22.568865    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:22.579518    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:22.579583    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:22.590096    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:22.590169    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:22.600304    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:22.600374    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:22.610975    4033 logs.go:276] 0 containers: []
	W0814 09:53:22.610987    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:22.611044    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:22.621792    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:22.621808    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:22.621814    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:22.657490    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:22.657499    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:22.671813    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:22.671823    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:22.686704    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:22.686714    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:22.698557    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:22.698568    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:22.714211    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:22.714224    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:22.725733    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:22.725744    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:22.730715    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:22.730723    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:22.765945    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:22.765956    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:22.783473    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:22.783482    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:22.797466    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:22.797480    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:22.815578    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:22.815591    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:22.834197    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:22.834208    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:25.360862    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:30.362877    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:30.363078    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:30.378860    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:30.378935    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:30.391234    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:30.391305    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:30.402321    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:53:30.402393    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:30.413405    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:30.413472    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:30.423620    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:30.423685    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:30.433946    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:30.434012    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:30.444728    4033 logs.go:276] 0 containers: []
	W0814 09:53:30.444737    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:30.444790    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:30.455385    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:30.455399    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:30.455405    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:30.467084    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:30.467097    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:30.484549    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:30.484570    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:30.496488    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:30.496501    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:30.521044    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:30.521056    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:30.560298    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:30.560308    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:30.578976    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:30.578988    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:30.594137    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:30.594146    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:30.605872    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:30.605883    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:30.617106    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:30.617116    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:30.628981    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:30.628990    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:30.665161    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:30.665169    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:30.669618    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:30.669625    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:33.183616    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:38.185617    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:38.185778    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:38.198075    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:38.198156    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:38.209425    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:38.209496    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:38.220020    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:53:38.220091    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:38.230572    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:38.230639    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:38.241382    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:38.241451    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:38.251803    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:38.251872    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:38.261772    4033 logs.go:276] 0 containers: []
	W0814 09:53:38.261790    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:38.261847    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:38.272537    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:38.272552    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:38.272561    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:38.297508    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:38.297519    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:38.309620    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:38.309631    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:38.345094    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:38.345106    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:38.380567    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:38.380580    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:38.394719    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:38.394730    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:38.406553    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:38.406566    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:38.418842    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:38.418854    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:38.436780    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:38.436789    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:38.441297    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:38.441306    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:38.456612    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:38.456622    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:38.473117    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:38.473127    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:38.487891    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:38.487903    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:41.001942    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:46.004143    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:46.004304    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:46.018703    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:46.018784    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:46.029973    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:46.030047    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:46.041318    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:53:46.041394    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:46.060212    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:46.060280    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:46.071068    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:46.071139    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:46.089534    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:46.089599    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:46.100646    4033 logs.go:276] 0 containers: []
	W0814 09:53:46.100655    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:46.100704    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:46.111472    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:46.111489    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:46.111495    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:46.123253    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:46.123264    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:46.158041    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:46.158048    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:46.162740    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:46.162746    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:46.178668    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:46.178678    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:46.193125    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:46.193137    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:46.212289    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:46.212299    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:46.224324    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:46.224336    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:46.241424    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:53:46.241433    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:53:46.252957    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:46.252969    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:46.264704    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:46.264714    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:46.275919    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:46.275931    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:46.287499    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:46.287509    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:46.323580    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:53:46.323593    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:53:46.335081    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:46.335092    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:48.860520    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:53.862669    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:53.862801    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:53.876011    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:53.876092    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:53.887355    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:53.887427    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:53.898120    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:53:53.898195    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:53.908113    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:53.908176    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:53.918570    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:53.918638    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:53.929075    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:53.929145    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:53.939929    4033 logs.go:276] 0 containers: []
	W0814 09:53:53.939940    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:53.939999    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:53.950073    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:53.950090    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:53.950095    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:53.962251    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:53.962262    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:53.997637    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:53.997645    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:54.011737    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:54.011747    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:54.023603    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:54.023613    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:54.035493    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:54.035504    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:54.049356    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:53:54.049366    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:53:54.060771    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:53:54.060782    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:53:54.072426    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:54.072439    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:54.096084    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:54.096092    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:54.140504    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:54.140516    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:54.156650    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:54.156662    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:54.177991    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:54.178003    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:54.182707    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:54.182714    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:54.194496    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:54.194508    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:56.713881    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:01.716278    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:01.716493    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:01.734133    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:01.734223    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:01.747915    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:01.747991    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:01.759904    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:01.759976    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:01.773297    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:01.773369    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:01.784205    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:01.784275    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:01.795172    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:01.795246    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:01.805460    4033 logs.go:276] 0 containers: []
	W0814 09:54:01.805477    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:01.805538    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:01.816813    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:01.816831    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:01.816837    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:01.834057    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:01.834070    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:01.845858    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:01.845869    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:01.857882    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:01.857895    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:01.875530    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:01.875543    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:01.900664    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:01.900671    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:01.939059    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:01.939069    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:01.952857    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:01.952872    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:01.968280    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:01.968289    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:02.003738    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:02.003746    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:02.017097    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:02.017107    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:02.028793    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:02.028803    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:02.034274    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:02.034282    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:02.047581    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:02.047592    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:02.059200    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:02.059211    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:04.572530    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:09.574675    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:09.574878    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:09.601333    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:09.601415    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:09.614940    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:09.615017    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:09.627430    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:09.627499    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:09.638436    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:09.638502    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:09.649158    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:09.649232    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:09.659938    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:09.660029    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:09.670158    4033 logs.go:276] 0 containers: []
	W0814 09:54:09.670170    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:09.670232    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:09.680965    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:09.680980    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:09.680985    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:09.692096    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:09.692106    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:09.703618    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:09.703628    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:09.738457    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:09.738468    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:09.742745    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:09.742752    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:09.757400    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:09.757412    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:09.771887    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:09.771896    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:09.783407    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:09.783419    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:09.798196    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:09.798207    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:09.810373    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:09.810386    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:09.822050    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:09.822061    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:09.857137    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:09.857147    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:09.869278    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:09.869287    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:09.881618    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:09.881630    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:09.898829    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:09.898839    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:12.425469    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:17.426076    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:17.426185    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:17.437496    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:17.437571    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:17.452966    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:17.453034    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:17.463596    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:17.463663    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:17.473993    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:17.474057    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:17.487227    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:17.487295    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:17.497556    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:17.497622    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:17.508150    4033 logs.go:276] 0 containers: []
	W0814 09:54:17.508162    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:17.508228    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:17.523581    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:17.523601    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:17.523607    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:17.540463    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:17.540474    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:17.565843    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:17.565856    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:17.570098    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:17.570104    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:17.584804    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:17.584817    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:17.596351    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:17.596366    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:17.607763    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:17.607777    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:17.619849    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:17.619859    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:17.631643    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:17.631657    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:17.650828    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:17.650841    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:17.686550    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:17.686561    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:17.700771    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:17.700784    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:17.712177    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:17.712188    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:17.723741    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:17.723752    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:17.741002    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:17.741012    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:20.276350    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:25.278502    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:25.278763    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:25.298786    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:25.298883    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:25.318362    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:25.318445    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:25.330246    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:25.330315    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:25.340587    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:25.340654    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:25.351562    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:25.351630    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:25.363652    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:25.363723    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:25.376491    4033 logs.go:276] 0 containers: []
	W0814 09:54:25.376504    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:25.376565    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:25.387438    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:25.387455    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:25.387461    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:25.422059    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:25.422073    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:25.426340    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:25.426346    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:25.468606    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:25.468620    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:25.486305    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:25.486315    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:25.497716    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:25.497726    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:25.510090    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:25.510103    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:25.522793    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:25.522807    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:25.537907    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:25.537922    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:25.557483    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:25.557491    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:25.569433    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:25.569448    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:25.584321    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:25.584335    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:25.609842    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:25.609850    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:25.622450    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:25.622464    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:25.635041    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:25.635055    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:28.147019    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:33.149152    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:33.149373    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:33.169876    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:33.169978    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:33.186248    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:33.186324    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:33.197720    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:33.197803    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:33.214307    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:33.214385    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:33.225051    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:33.225119    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:33.235604    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:33.235667    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:33.246122    4033 logs.go:276] 0 containers: []
	W0814 09:54:33.246132    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:33.246208    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:33.256382    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:33.256399    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:33.256404    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:33.289821    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:33.289830    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:33.326862    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:33.326874    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:33.341736    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:33.341747    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:33.353784    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:33.353794    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:33.366620    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:33.366631    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:33.386548    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:33.386562    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:33.401107    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:33.401117    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:33.419628    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:33.419639    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:33.424679    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:33.424686    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:33.439585    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:33.439596    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:33.465211    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:33.465219    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:33.476941    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:33.476952    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:33.489065    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:33.489076    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:33.508679    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:33.508689    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:36.028564    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:41.030642    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:41.030887    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:41.049840    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:41.049931    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:41.062119    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:41.062195    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:41.074676    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:41.074745    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:41.085236    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:41.085305    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:41.095792    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:41.095854    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:41.106053    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:41.106127    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:41.120590    4033 logs.go:276] 0 containers: []
	W0814 09:54:41.120604    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:41.120667    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:41.131802    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:41.131818    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:41.131823    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:41.147047    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:41.147058    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:41.166691    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:41.166701    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:41.178277    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:41.178288    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:41.213219    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:41.213232    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:41.225223    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:41.225234    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:41.244862    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:41.244875    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:41.270288    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:41.270298    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:41.305323    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:41.305336    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:41.321905    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:41.321920    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:41.336154    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:41.336170    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:41.348199    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:41.348209    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:41.353052    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:41.353060    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:41.367950    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:41.367963    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:41.385895    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:41.385905    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:43.901193    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:48.903235    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:48.903478    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:48.928799    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:48.928908    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:48.945401    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:48.945487    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:48.959177    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:48.959256    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:48.970725    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:48.970802    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:48.980830    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:48.980900    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:48.991926    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:48.991995    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:49.001959    4033 logs.go:276] 0 containers: []
	W0814 09:54:49.001970    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:49.002027    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:49.012310    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:49.012326    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:49.012331    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:49.026477    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:49.026488    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:49.061317    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:49.061330    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:49.073552    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:49.073567    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:49.085586    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:49.085598    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:49.109459    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:49.109470    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:49.121640    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:49.121652    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:49.157125    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:49.157135    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:49.171269    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:49.171279    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:49.182567    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:49.182578    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:49.186931    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:49.186938    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:49.199002    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:49.199013    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:49.214339    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:49.214349    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:49.226166    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:49.226176    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:49.249320    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:49.249328    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:51.762892    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:56.765185    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:56.765448    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:56.788070    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:56.788165    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:56.803262    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:56.803344    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:56.816062    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:56.816137    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:56.829680    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:56.829756    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:56.839988    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:56.840061    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:56.850204    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:56.850277    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:56.862043    4033 logs.go:276] 0 containers: []
	W0814 09:54:56.862059    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:56.862121    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:56.876172    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:56.876192    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:56.876198    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:56.888450    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:56.888461    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:56.900442    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:56.900451    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:56.912073    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:56.912084    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:56.946504    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:56.946514    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:56.958259    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:56.958273    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:56.991886    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:56.991894    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:57.005986    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:57.005995    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:57.019883    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:57.019897    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:57.035747    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:57.035756    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:57.053496    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:57.053506    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:57.057693    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:57.057701    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:57.069377    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:57.069390    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:57.093974    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:57.093982    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:57.105329    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:57.105340    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:59.642614    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:55:04.644643    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:04.644760    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:55:04.655928    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:55:04.655995    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:55:04.667159    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:55:04.667246    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:55:04.681773    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:55:04.681857    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:55:04.692711    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:55:04.692769    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:55:04.703287    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:55:04.703346    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:55:04.714950    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:55:04.715007    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:55:04.725614    4033 logs.go:276] 0 containers: []
	W0814 09:55:04.725626    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:55:04.725684    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:55:04.736157    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:55:04.736174    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:55:04.736179    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:55:04.748393    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:55:04.748406    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:55:04.760885    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:55:04.760898    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:55:04.794721    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:55:04.794730    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:55:04.834387    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:55:04.834398    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:55:04.853059    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:55:04.853072    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:55:04.865286    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:55:04.865298    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:55:04.882379    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:55:04.882389    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:55:04.896276    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:55:04.896288    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:55:04.901171    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:55:04.901180    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:55:04.919373    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:55:04.919387    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:55:04.931544    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:55:04.931558    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:55:04.943681    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:55:04.943693    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:55:04.957962    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:55:04.957972    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:55:04.973024    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:55:04.973034    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:55:07.498432    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:55:12.500573    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:12.500695    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:55:12.514094    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:55:12.514163    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:55:12.527429    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:55:12.527496    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:55:12.538087    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:55:12.538158    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:55:12.548754    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:55:12.548826    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:55:12.559083    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:55:12.559144    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:55:12.572864    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:55:12.572929    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:55:12.588840    4033 logs.go:276] 0 containers: []
	W0814 09:55:12.588854    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:55:12.588909    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:55:12.599307    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:55:12.599323    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:55:12.599328    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:55:12.633344    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:55:12.633351    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:55:12.668531    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:55:12.668541    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:55:12.685986    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:55:12.685997    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:55:12.697882    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:55:12.697892    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:55:12.710219    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:55:12.710228    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:55:12.721875    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:55:12.721887    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:55:12.736599    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:55:12.736608    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:55:12.748353    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:55:12.748365    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:55:12.760178    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:55:12.760189    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:55:12.765052    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:55:12.765062    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:55:12.779032    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:55:12.779041    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:55:12.790495    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:55:12.790505    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:55:12.804235    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:55:12.804243    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:55:12.815727    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:55:12.815737    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:55:15.342495    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:55:20.344611    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:20.344787    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:55:20.365998    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:55:20.366092    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:55:20.380769    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:55:20.380844    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:55:20.393744    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:55:20.393834    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:55:20.404670    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:55:20.404746    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:55:20.415329    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:55:20.415397    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:55:20.425775    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:55:20.425844    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:55:20.441975    4033 logs.go:276] 0 containers: []
	W0814 09:55:20.441987    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:55:20.442044    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:55:20.453033    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:55:20.453049    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:55:20.453055    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:55:20.488495    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:55:20.488510    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:55:20.510319    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:55:20.510334    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:55:20.521848    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:55:20.521859    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:55:20.536588    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:55:20.536600    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:55:20.549009    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:55:20.549021    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:55:20.553376    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:55:20.553386    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:55:20.587174    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:55:20.587190    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:55:20.599599    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:55:20.599610    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:55:20.611413    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:55:20.611425    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:55:20.622654    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:55:20.622666    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:55:20.636694    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:55:20.636709    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:55:20.648357    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:55:20.648368    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:55:20.660988    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:55:20.660998    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:55:20.679003    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:55:20.679018    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:55:23.205473    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:55:28.207571    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:28.210186    4033 out.go:177] 
	W0814 09:55:28.214564    4033 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0814 09:55:28.214575    4033 out.go:239] * 
	W0814 09:55:28.215279    4033 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:55:28.226451    4033 out.go:177] 

** /stderr **
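
Editor's note: the stderr dump above is a single pattern repeated until the 6m0s node-wait budget runs out — probe the apiserver's /healthz endpoint, give up after roughly five seconds ("Client.Timeout exceeded while awaiting headers"), then enumerate the kube-system containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` and re-gather their logs before the next probe. Below is a minimal sketch of that probe loop, assuming only what the log itself shows (guest IP 10.0.2.15, port 8443, a self-signed apiserver certificate, a ~5s client timeout); it is illustrative, not minikube's actual api_server.go implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
			Transport: &http.Transport{
				// the apiserver serves a self-signed cert, so the probe skips verification
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // the wait budget named in the GUEST_START error
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				// e.g. "Client.Timeout exceeded while awaiting headers"
				fmt.Println("stopped:", err)
				time.Sleep(2 * time.Second) // brief pause before re-gathering logs and retrying
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Println("apiserver healthz never reported healthy")
	}
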
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-579000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
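
Editor's note: exit status 80 appears to be minikube's GUEST_START error class, which lets the harness tell a guest-provisioning failure apart from a test-framework crash. A hypothetical, minimal reproduction of what the `(dbg) Run:` helper does under the hood — plain os/exec semantics only; the real helpers live in helpers_test.go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "start",
			"-p", "running-upgrade-579000", "--memory=2200",
			"--alsologtostderr", "-v=1", "--driver=qemu2")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// the run recorded above ended with exit status 80
			fmt.Println("exit status:", exitErr.ExitCode())
		}
	}
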
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-14 09:55:28.327959 -0700 PDT m=+2782.552854126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-579000 -n running-upgrade-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-579000 -n running-upgrade-579000: exit status 2 (15.570447667s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
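
Editor's note: the `--format={{.Host}}` flag is a Go text/template rendered against the status struct, which is why the post-mortem prints just `Running` even though the command exits 2 (some other component is unhealthy). A small illustration of the template mechanism with a stand-in struct — every field other than Host is hypothetical here:

	package main

	import (
		"os"
		"text/template"
	)

	// stand-in for the status fields; only Host is taken from the output above
	type Status struct{ Host, Kubelet, APIServer string }

	func main() {
		t := template.Must(template.New("status").Parse("{{.Host}}\n"))
		t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"})
		// prints: Running
	}
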
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-579000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-625000 sudo                                | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo                                | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo cat                            | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo cat                            | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo                                | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo                                | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo                                | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo cat                            | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo cat                            | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo                                | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo                                | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo                                | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo find                           | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-625000 sudo crio                           | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-625000                                     | cilium-625000             | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT | 14 Aug 24 09:45 PDT |
	| start   | -p kubernetes-upgrade-652000                         | kubernetes-upgrade-652000 | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-556000                             | offline-docker-556000     | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT | 14 Aug 24 09:45 PDT |
	| stop    | -p kubernetes-upgrade-652000                         | kubernetes-upgrade-652000 | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT | 14 Aug 24 09:45 PDT |
	| start   | -p stopped-upgrade-996000                            | minikube                  | jenkins | v1.26.0 | 14 Aug 24 09:45 PDT | 14 Aug 24 09:46 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-652000                         | kubernetes-upgrade-652000 | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-652000                         | kubernetes-upgrade-652000 | jenkins | v1.33.1 | 14 Aug 24 09:45 PDT | 14 Aug 24 09:45 PDT |
	| start   | -p running-upgrade-579000                            | minikube                  | jenkins | v1.26.0 | 14 Aug 24 09:45 PDT | 14 Aug 24 09:46 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-996000 stop                          | minikube                  | jenkins | v1.26.0 | 14 Aug 24 09:46 PDT | 14 Aug 24 09:46 PDT |
	| start   | -p stopped-upgrade-996000                            | stopped-upgrade-996000    | jenkins | v1.33.1 | 14 Aug 24 09:46 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-579000                            | running-upgrade-579000    | jenkins | v1.33.1 | 14 Aug 24 09:46 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 09:46:55
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
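
Editor's note: this header layout is the standard glog/klog one — a level letter (I/W/E/F), `mmdd`, a microsecond timestamp, the thread id, and the `file:line` of the call site. A minimal sketch that produces the same layout with k8s.io/klog/v2 (assuming, not confirming, that this is the exact logger the binary builds with):

	package main

	import "k8s.io/klog/v2"

	func main() {
		klog.InitFlags(nil) // defaults log to stderr with the header format described above
		// emits e.g. "I0814 09:46:55.660923    4033 main.go:8] starting"
		klog.Infof("starting")
		klog.Flush()
	}
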
	I0814 09:46:55.660923    4033 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:46:55.661090    4033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:46:55.661095    4033 out.go:304] Setting ErrFile to fd 2...
	I0814 09:46:55.661097    4033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:46:55.661242    4033 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:46:55.662764    4033 out.go:298] Setting JSON to false
	I0814 09:46:55.682080    4033 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2772,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:46:55.682215    4033 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:46:55.686794    4033 out.go:177] * [running-upgrade-579000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:46:55.692853    4033 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:46:55.692891    4033 notify.go:220] Checking for updates...
	I0814 09:46:55.699826    4033 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:46:55.703801    4033 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:46:55.706799    4033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:46:55.709806    4033 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:46:55.712831    4033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:46:55.716165    4033 config.go:182] Loaded profile config "running-upgrade-579000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0814 09:46:55.719783    4033 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 09:46:55.722789    4033 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:46:55.726777    4033 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:46:55.732783    4033 start.go:297] selected driver: qemu2
	I0814 09:46:55.732795    4033 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50343 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0814 09:46:55.732885    4033 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:46:55.735534    4033 cni.go:84] Creating CNI manager for ""
	I0814 09:46:55.735557    4033 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:46:55.735595    4033 start.go:340] cluster config:
	{Name:running-upgrade-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50343 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0814 09:46:55.735658    4033 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:46:55.742769    4033 out.go:177] * Starting "running-upgrade-579000" primary control-plane node in "running-upgrade-579000" cluster
	I0814 09:46:55.746792    4033 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0814 09:46:55.746850    4033 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0814 09:46:55.746858    4033 cache.go:56] Caching tarball of preloaded images
	I0814 09:46:55.746938    4033 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:46:55.746944    4033 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0814 09:46:55.747003    4033 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/config.json ...
	I0814 09:46:55.747443    4033 start.go:360] acquireMachinesLock for running-upgrade-579000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:47:05.562944    4033 start.go:364] duration metric: took 9.816802875s to acquireMachinesLock for "running-upgrade-579000"
	I0814 09:47:05.562967    4033 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:47:05.562970    4033 fix.go:54] fixHost starting: 
	I0814 09:47:05.563773    4033 fix.go:112] recreateIfNeeded on running-upgrade-579000: state=Running err=<nil>
	W0814 09:47:05.563783    4033 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:47:05.571959    4033 out.go:177] * Updating the running qemu2 "running-upgrade-579000" VM ...
	I0814 09:47:05.575891    4033 machine.go:94] provisionDockerMachine start ...
	I0814 09:47:05.575939    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.576054    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:05.576058    4033 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 09:47:05.638406    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-579000
	
	I0814 09:47:05.638422    4033 buildroot.go:166] provisioning hostname "running-upgrade-579000"
	I0814 09:47:05.638446    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.638567    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:05.638574    4033 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-579000 && echo "running-upgrade-579000" | sudo tee /etc/hostname
	I0814 09:47:04.658948    4019 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/config.json ...
	I0814 09:47:04.659165    4019 machine.go:94] provisionDockerMachine start ...
	I0814 09:47:04.659208    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:04.659349    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:04.659354    4019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 09:47:04.722965    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 09:47:04.722980    4019 buildroot.go:166] provisioning hostname "stopped-upgrade-996000"
	I0814 09:47:04.723027    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:04.723137    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:04.723143    4019 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-996000 && echo "stopped-upgrade-996000" | sudo tee /etc/hostname
	I0814 09:47:04.790976    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-996000
	
	I0814 09:47:04.791033    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:04.791161    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:04.791170    4019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-996000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-996000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-996000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:47:04.857575    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
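
The hostname repair above is an idempotent shell pattern: /etc/hosts is only touched when no entry for the new hostname exists, and an existing 127.0.1.1 line is rewritten in place rather than duplicated. A minimal Go sketch that renders the same command for an arbitrary hostname (the helper name etcHostsFixCmd is illustrative, not minikube's source):

    package main

    import "fmt"

    // etcHostsFixCmd builds the idempotent /etc/hosts repair seen in the log:
    // rewrite an existing 127.0.1.1 entry if present, otherwise append one.
    func etcHostsFixCmd(hostname string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
      fi
    fi`, hostname)
    }

    func main() {
        fmt.Println(etcHostsFixCmd("stopped-upgrade-996000"))
    }
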
	I0814 09:47:04.857587    4019 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19446-1067/.minikube CaCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19446-1067/.minikube}
	I0814 09:47:04.857599    4019 buildroot.go:174] setting up certificates
	I0814 09:47:04.857604    4019 provision.go:84] configureAuth start
	I0814 09:47:04.857608    4019 provision.go:143] copyHostCerts
	I0814 09:47:04.857695    4019 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem, removing ...
	I0814 09:47:04.857702    4019 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem
	I0814 09:47:04.857796    4019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem (1123 bytes)
	I0814 09:47:04.857973    4019 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem, removing ...
	I0814 09:47:04.857978    4019 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem
	I0814 09:47:04.858022    4019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem (1675 bytes)
	I0814 09:47:04.858119    4019 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem, removing ...
	I0814 09:47:04.858123    4019 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem
	I0814 09:47:04.858160    4019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem (1082 bytes)
	I0814 09:47:04.858250    4019 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-996000 san=[127.0.0.1 localhost minikube stopped-upgrade-996000]
	I0814 09:47:04.928399    4019 provision.go:177] copyRemoteCerts
	I0814 09:47:04.928435    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:47:04.928444    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	I0814 09:47:04.962728    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 09:47:04.969989    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0814 09:47:04.977192    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 09:47:04.984072    4019 provision.go:87] duration metric: took 126.473791ms to configureAuth
	I0814 09:47:04.984083    4019 buildroot.go:189] setting minikube options for container-runtime
	I0814 09:47:04.984185    4019 config.go:182] Loaded profile config "stopped-upgrade-996000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0814 09:47:04.984228    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:04.984313    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:04.984317    4019 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0814 09:47:05.051946    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0814 09:47:05.051960    4019 buildroot.go:70] root file system type: tmpfs
	I0814 09:47:05.052028    4019 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0814 09:47:05.052094    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.052225    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:05.052258    4019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0814 09:47:05.119257    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0814 09:47:05.119305    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.119420    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:05.119429    4019 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0814 09:47:05.450423    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0814 09:47:05.450435    4019 machine.go:97] duration metric: took 791.351333ms to provisionDockerMachine
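
The unit swap that closes provisioning is deliberately conditional: "diff -u" succeeds when the rendered unit matches the installed one, so the move/enable/restart block only runs on a real change (or, as here on a machine restored from an old state, when the installed unit does not exist yet and diff fails with "can't stat"). A sketch of that one-liner as a Go helper (updateUnitCmd is an assumed name):

    package main

    import "fmt"

    // updateUnitCmd renders the conditional unit replacement from the log:
    // replace and restart only when the new unit differs from the old one.
    func updateUnitCmd(unit string) string {
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
                "sudo systemctl -f restart docker; }", unit)
    }

    func main() {
        fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
    }
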
	I0814 09:47:05.450441    4019 start.go:293] postStartSetup for "stopped-upgrade-996000" (driver="qemu2")
	I0814 09:47:05.450448    4019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:47:05.450512    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:47:05.450521    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	I0814 09:47:05.485426    4019 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 09:47:05.486827    4019 info.go:137] Remote host: Buildroot 2021.02.12
	I0814 09:47:05.486837    4019 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19446-1067/.minikube/addons for local assets ...
	I0814 09:47:05.486933    4019 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19446-1067/.minikube/files for local assets ...
	I0814 09:47:05.487023    4019 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem -> 16002.pem in /etc/ssl/certs
	I0814 09:47:05.487120    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:47:05.490187    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem --> /etc/ssl/certs/16002.pem (1708 bytes)
	I0814 09:47:05.497498    4019 start.go:296] duration metric: took 47.056541ms for postStartSetup
	I0814 09:47:05.497511    4019 fix.go:56] duration metric: took 20.938155209s for fixHost
	I0814 09:47:05.497546    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.497660    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:05.497664    4019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 09:47:05.562873    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723654025.168279587
	
	I0814 09:47:05.562881    4019 fix.go:216] guest clock: 1723654025.168279587
	I0814 09:47:05.562886    4019 fix.go:229] Guest: 2024-08-14 09:47:05.168279587 -0700 PDT Remote: 2024-08-14 09:47:05.497513 -0700 PDT m=+21.044678292 (delta=-329.233413ms)
	I0814 09:47:05.562903    4019 fix.go:200] guest clock delta is within tolerance: -329.233413ms
	I0814 09:47:05.562905    4019 start.go:83] releasing machines lock for "stopped-upgrade-996000", held for 21.003565916s
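
The guest-clock check above runs "date +%s.%N" inside the VM, parses the reply as a fractional Unix timestamp, and compares it with the host clock at the moment of the reply; here the -329ms delta is judged within tolerance, so no resync is needed. A minimal sketch of that comparison (the 1s tolerance is an assumed value for illustration):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's "date +%s.%N" output and returns
    // guest time minus host time.
    func clockDelta(guestUnix string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestUnix, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        host := time.Unix(0, 1723654025497513000) // 09:47:05.497513 PDT as Unix ns
        d, _ := clockDelta("1723654025.168279587", host)
        const tolerance = time.Second // assumed threshold
        fmt.Printf("delta=%v within=%v\n", d, -tolerance < d && d < tolerance)
    }
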
	I0814 09:47:05.562976    4019 ssh_runner.go:195] Run: cat /version.json
	I0814 09:47:05.562984    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	I0814 09:47:05.562990    4019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 09:47:05.563006    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	W0814 09:47:05.563696    4019 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50423->127.0.0.1:50234: write: broken pipe
	I0814 09:47:05.563715    4019 retry.go:31] will retry after 136.235677ms: ssh: handshake failed: write tcp 127.0.0.1:50423->127.0.0.1:50234: write: broken pipe
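
The broken-pipe handshake above is not fatal: the runner logs a warning and schedules another attempt after a short delay. A sketch of that retry shape (the attempt count and fixed backoff are assumptions; retry.go's real policy may differ):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry re-runs fn up to attempts times, sleeping backoff between tries.
    func retry(attempts int, backoff time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", backoff, err)
            time.Sleep(backoff)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(3, 150*time.Millisecond, func() error {
            calls++
            if calls < 2 {
                return errors.New("ssh: handshake failed: write: broken pipe")
            }
            return nil
        })
        fmt.Println("done:", err)
    }
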
	W0814 09:47:05.595412    4019 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0814 09:47:05.595474    4019 ssh_runner.go:195] Run: systemctl --version
	I0814 09:47:05.597153    4019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 09:47:05.598904    4019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 09:47:05.598935    4019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0814 09:47:05.601701    4019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0814 09:47:05.605976    4019 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 09:47:05.605986    4019 start.go:495] detecting cgroup driver to use...
	I0814 09:47:05.606054    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:47:05.612907    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0814 09:47:05.615891    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0814 09:47:05.618628    4019 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0814 09:47:05.618654    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0814 09:47:05.622274    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0814 09:47:05.625251    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0814 09:47:05.628434    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0814 09:47:05.631311    4019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 09:47:05.634574    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0814 09:47:05.638248    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0814 09:47:05.642003    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0814 09:47:05.645870    4019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:47:05.648998    4019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:47:05.651550    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:05.734158    4019 ssh_runner.go:195] Run: sudo systemctl restart containerd
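
The run of sed edits above rewrites /etc/containerd/config.toml in place so containerd matches the chosen "cgroupfs" driver: pin the sandbox (pause) image, force SystemdCgroup = false, and normalize the runtime to io.containerd.runc.v2 before the daemon-reload and restart. Collected as data, the core edits look like this (the wrapper function is illustrative; the command strings mirror the log):

    package main

    import "fmt"

    // containerdEdits returns the sed commands from the log that align
    // containerd with the cgroupfs driver.
    func containerdEdits() []string {
        return []string{
            `sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
            `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
        }
    }

    func main() {
        for _, c := range containerdEdits() {
            fmt.Println(c)
        }
    }
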
	I0814 09:47:05.741556    4019 start.go:495] detecting cgroup driver to use...
	I0814 09:47:05.741636    4019 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0814 09:47:05.748228    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 09:47:05.754661    4019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 09:47:05.765392    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 09:47:05.808142    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0814 09:47:05.812845    4019 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0814 09:47:05.853960    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0814 09:47:05.859506    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:47:05.865377    4019 ssh_runner.go:195] Run: which cri-dockerd
	I0814 09:47:05.866914    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0814 09:47:05.870001    4019 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0814 09:47:05.875699    4019 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0814 09:47:05.935593    4019 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0814 09:47:06.005102    4019 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0814 09:47:06.005177    4019 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0814 09:47:06.011039    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:06.098560    4019 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0814 09:47:07.225386    4019 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.126918041s)
	I0814 09:47:07.225516    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0814 09:47:07.230841    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0814 09:47:07.236448    4019 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0814 09:47:07.318098    4019 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0814 09:47:07.403116    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:07.487797    4019 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0814 09:47:07.494838    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0814 09:47:07.500099    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:07.573732    4019 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0814 09:47:07.617793    4019 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0814 09:47:07.617951    4019 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
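
"Will wait 60s for socket path" is a stat-based poll: keep checking for /var/run/cri-dockerd.sock until it appears or the deadline passes. A sketch of that wait (the poll interval is an assumption):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 2*time.Second))
    }
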
	I0814 09:47:07.620661    4019 start.go:563] Will wait 60s for crictl version
	I0814 09:47:07.620705    4019 ssh_runner.go:195] Run: which crictl
	I0814 09:47:07.622343    4019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 09:47:07.637237    4019 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0814 09:47:07.637297    4019 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0814 09:47:07.653421    4019 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0814 09:47:07.675913    4019 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0814 09:47:07.675978    4019 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0814 09:47:07.677322    4019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:47:07.681334    4019 kubeadm.go:883] updating cluster {Name:stopped-upgrade-996000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50269 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...

	I0814 09:47:07.681390    4019 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0814 09:47:07.681433    4019 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0814 09:47:07.692379    4019 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0814 09:47:07.692388    4019 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
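
The "wasn't preloaded" verdict above comes from a simple membership check: "docker images" reported only the old k8s.gcr.io names, and the expected registry.k8s.io/kube-apiserver:v1.24.1 was absent, so the cached tarball must be loaded. A sketch of that decision (needsCacheLoad is an illustrative name):

    package main

    import "fmt"

    // needsCacheLoad reports whether the wanted image is missing from the
    // runtime's image list.
    func needsCacheLoad(got []string, want string) bool {
        for _, img := range got {
            if img == want {
                return false
            }
        }
        return true
    }

    func main() {
        got := []string{
            "k8s.gcr.io/kube-apiserver:v1.24.1", // old registry name
            "k8s.gcr.io/pause:3.7",
        }
        fmt.Println(needsCacheLoad(got, "registry.k8s.io/kube-apiserver:v1.24.1")) // true
    }
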
	I0814 09:47:07.692435    4019 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0814 09:47:07.695526    4019 ssh_runner.go:195] Run: which lz4
	I0814 09:47:07.696760    4019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 09:47:07.697990    4019 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 09:47:07.698002    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0814 09:47:08.644019    4019 docker.go:649] duration metric: took 947.378666ms to copy over tarball
	I0814 09:47:08.644083    4019 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
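
The preload path above is: check whether /preloaded.tar.lz4 already exists on the guest, scp the ~360 MB cached tarball over if not, then unpack it into /var with extended attributes preserved so image capabilities survive. The guest-side commands, gathered as a sketch (the wrapper function is illustrative):

    package main

    import "fmt"

    // preloadCommands lists the guest-side commands from the log for
    // checking, extracting, and cleaning up the preload tarball.
    func preloadCommands(tarball string) []string {
        return []string{
            fmt.Sprintf(`stat -c "%%s %%y" %s`, tarball), // existence check
            "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball,
            "rm " + tarball, // reclaim space once images are loaded
        }
    }

    func main() {
        for _, c := range preloadCommands("/preloaded.tar.lz4") {
            fmt.Println(c)
        }
    }
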
	I0814 09:47:05.703790    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-579000
	
	I0814 09:47:05.703845    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.703968    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:05.703978    4033 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-579000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-579000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-579000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:47:05.764390    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:47:05.764403    4033 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19446-1067/.minikube CaCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19446-1067/.minikube}
	I0814 09:47:05.764411    4033 buildroot.go:174] setting up certificates
	I0814 09:47:05.764415    4033 provision.go:84] configureAuth start
	I0814 09:47:05.764419    4033 provision.go:143] copyHostCerts
	I0814 09:47:05.764492    4033 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem, removing ...
	I0814 09:47:05.764499    4033 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem
	I0814 09:47:05.764617    4033 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem (1082 bytes)
	I0814 09:47:05.764803    4033 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem, removing ...
	I0814 09:47:05.764807    4033 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem
	I0814 09:47:05.764856    4033 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem (1123 bytes)
	I0814 09:47:05.764962    4033 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem, removing ...
	I0814 09:47:05.764966    4033 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem
	I0814 09:47:05.765005    4033 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem (1675 bytes)
	I0814 09:47:05.765096    4033 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-579000 san=[127.0.0.1 localhost minikube running-upgrade-579000]
	I0814 09:47:05.940070    4033 provision.go:177] copyRemoteCerts
	I0814 09:47:05.940114    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:47:05.940124    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	I0814 09:47:05.972328    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 09:47:05.979486    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0814 09:47:05.986540    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 09:47:05.994326    4033 provision.go:87] duration metric: took 229.921834ms to configureAuth
	I0814 09:47:05.994338    4033 buildroot.go:189] setting minikube options for container-runtime
	I0814 09:47:05.994451    4033 config.go:182] Loaded profile config "running-upgrade-579000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0814 09:47:05.994486    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.994588    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:05.994592    4033 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0814 09:47:06.056516    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0814 09:47:06.056527    4033 buildroot.go:70] root file system type: tmpfs
	I0814 09:47:06.056577    4033 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0814 09:47:06.056624    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:06.056748    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:06.056783    4033 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0814 09:47:06.121758    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0814 09:47:06.121810    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:06.121930    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:06.121938    4033 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0814 09:47:06.182300    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:47:06.182313    4033 machine.go:97] duration metric: took 606.481834ms to provisionDockerMachine
	I0814 09:47:06.182318    4033 start.go:293] postStartSetup for "running-upgrade-579000" (driver="qemu2")
	I0814 09:47:06.182325    4033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:47:06.182380    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:47:06.182389    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	I0814 09:47:06.238783    4033 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 09:47:06.240250    4033 info.go:137] Remote host: Buildroot 2021.02.12
	I0814 09:47:06.240261    4033 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19446-1067/.minikube/addons for local assets ...
	I0814 09:47:06.240341    4033 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19446-1067/.minikube/files for local assets ...
	I0814 09:47:06.240434    4033 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem -> 16002.pem in /etc/ssl/certs
	I0814 09:47:06.240528    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:47:06.243411    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem --> /etc/ssl/certs/16002.pem (1708 bytes)
	I0814 09:47:06.252561    4033 start.go:296] duration metric: took 70.242792ms for postStartSetup
	I0814 09:47:06.252594    4033 fix.go:56] duration metric: took 689.686458ms for fixHost
	I0814 09:47:06.252642    4033 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:06.252761    4033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aa85a0] 0x104aaae00 <nil>  [] 0s} localhost 50274 <nil> <nil>}
	I0814 09:47:06.252767    4033 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 09:47:06.318323    4033 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723654026.438696102
	
	I0814 09:47:06.318337    4033 fix.go:216] guest clock: 1723654026.438696102
	I0814 09:47:06.318342    4033 fix.go:229] Guest: 2024-08-14 09:47:06.438696102 -0700 PDT Remote: 2024-08-14 09:47:06.252596 -0700 PDT m=+10.617043792 (delta=186.100102ms)
	I0814 09:47:06.318361    4033 fix.go:200] guest clock delta is within tolerance: 186.100102ms
	I0814 09:47:06.318364    4033 start.go:83] releasing machines lock for "running-upgrade-579000", held for 755.491333ms
	I0814 09:47:06.318436    4033 ssh_runner.go:195] Run: cat /version.json
	I0814 09:47:06.318450    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	I0814 09:47:06.318436    4033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 09:47:06.318474    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	W0814 09:47:06.319126    4033 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50464->127.0.0.1:50274: write: broken pipe
	I0814 09:47:06.319138    4033 retry.go:31] will retry after 260.071303ms: ssh: handshake failed: write tcp 127.0.0.1:50464->127.0.0.1:50274: write: broken pipe
	W0814 09:47:06.615411    4033 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0814 09:47:06.615479    4033 ssh_runner.go:195] Run: systemctl --version
	I0814 09:47:06.617704    4033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 09:47:06.619457    4033 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 09:47:06.619483    4033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0814 09:47:06.623014    4033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0814 09:47:06.627971    4033 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 09:47:06.627980    4033 start.go:495] detecting cgroup driver to use...
	I0814 09:47:06.628046    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:47:06.633541    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0814 09:47:06.637381    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0814 09:47:06.640652    4033 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0814 09:47:06.640688    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0814 09:47:06.643980    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0814 09:47:06.647070    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0814 09:47:06.650442    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0814 09:47:06.653427    4033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 09:47:06.658252    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0814 09:47:06.661301    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0814 09:47:06.664286    4033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0814 09:47:06.667336    4033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:47:06.670273    4033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:47:06.672814    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:06.767246    4033 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0814 09:47:06.779385    4033 start.go:495] detecting cgroup driver to use...
	I0814 09:47:06.779459    4033 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0814 09:47:06.785008    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 09:47:06.799321    4033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 09:47:06.806550    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 09:47:06.811445    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0814 09:47:06.815791    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:47:06.821538    4033 ssh_runner.go:195] Run: which cri-dockerd
	I0814 09:47:06.822771    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0814 09:47:06.825319    4033 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0814 09:47:06.829840    4033 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0814 09:47:06.923236    4033 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0814 09:47:07.037319    4033 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0814 09:47:07.037379    4033 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0814 09:47:07.048106    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:07.144984    4033 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0814 09:47:10.830204    4033 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.685556792s)
	I0814 09:47:10.830274    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0814 09:47:10.836884    4033 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0814 09:47:10.842962    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0814 09:47:10.847188    4033 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0814 09:47:10.929750    4033 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0814 09:47:11.014324    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:11.101554    4033 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0814 09:47:11.108145    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0814 09:47:11.112555    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:11.195560    4033 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0814 09:47:11.242055    4033 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0814 09:47:11.242135    4033 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0814 09:47:11.244441    4033 start.go:563] Will wait 60s for crictl version
	I0814 09:47:11.244493    4033 ssh_runner.go:195] Run: which crictl
	I0814 09:47:11.245835    4033 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 09:47:11.258194    4033 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0814 09:47:11.258266    4033 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0814 09:47:11.271333    4033 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0814 09:47:09.800990    4019 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.157003291s)
	I0814 09:47:09.801003    4019 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 09:47:09.816708    4019 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0814 09:47:09.819691    4019 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0814 09:47:09.824798    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:09.903171    4019 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0814 09:47:11.428095    4019 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.525043458s)
	I0814 09:47:11.428185    4019 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0814 09:47:11.444011    4019 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0814 09:47:11.444020    4019 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0814 09:47:11.444025    4019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 09:47:11.448065    4019 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:11.449920    4019 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:11.452136    4019 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:11.452219    4019 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:11.454772    4019 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:11.454799    4019 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0814 09:47:11.456497    4019 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:11.456627    4019 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:11.458512    4019 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0814 09:47:11.459321    4019 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:11.459866    4019 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:11.459897    4019 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:11.463664    4019 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:11.463672    4019 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:11.463676    4019 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:11.465743    4019 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:11.917773    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:11.929413    4019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0814 09:47:11.929442    4019 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:11.929516    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:11.936769    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:11.943888    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0814 09:47:11.951749    4019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0814 09:47:11.951770    4019 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:11.951889    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:11.964195    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0814 09:47:11.964544    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0814 09:47:11.976593    4019 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0814 09:47:11.976618    4019 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0814 09:47:11.976676    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0814 09:47:11.981071    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:11.990611    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0814 09:47:11.990735    4019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0814 09:47:11.993932    4019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0814 09:47:11.993956    4019 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:11.993963    4019 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0814 09:47:11.993998    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0814 09:47:11.994016    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:12.002104    4019 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0814 09:47:12.002127    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0814 09:47:12.003769    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0814 09:47:12.013676    4019 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0814 09:47:12.013798    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:12.019790    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0814 09:47:12.049134    4019 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0814 09:47:12.049186    4019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0814 09:47:12.049201    4019 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:12.049272    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:12.049305    4019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0814 09:47:12.049312    4019 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:12.049332    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:12.058516    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:12.074115    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0814 09:47:12.074288    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0814 09:47:12.074389    4019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0814 09:47:12.082150    4019 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0814 09:47:12.082148    4019 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0814 09:47:12.082183    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0814 09:47:12.082186    4019 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:12.082238    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0814 09:47:12.091303    4019 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0814 09:47:12.091418    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:12.104265    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0814 09:47:12.104407    4019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0814 09:47:12.126053    4019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0814 09:47:12.126082    4019 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:12.126081    4019 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0814 09:47:12.126109    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0814 09:47:12.126135    4019 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:12.161734    4019 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0814 09:47:12.161750    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0814 09:47:12.187022    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 09:47:12.187158    4019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0814 09:47:12.264746    4019 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0814 09:47:12.264799    4019 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0814 09:47:12.264825    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0814 09:47:12.337275    4019 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 09:47:12.337302    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0814 09:47:12.722425    4019 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 09:47:12.722492    4019 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0814 09:47:12.722525    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0814 09:47:12.875862    4019 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0814 09:47:12.875909    4019 cache_images.go:92] duration metric: took 1.432000333s to LoadCachedImages
	W0814 09:47:12.875958    4019 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
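The X warning is raised on the host, not in the VM: LoadCachedImages stats each cached tarball before transferring it, and the kube-apiserver entry is simply missing from the host-side cache, so that one image is never loaded. The failing check can be reproduced directly with the path quoted in the warning:

	stat /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1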
	I0814 09:47:12.875963    4019 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0814 09:47:12.876024    4019 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-996000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 09:47:12.876125    4019 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0814 09:47:12.890457    4019 cni.go:84] Creating CNI manager for ""
	I0814 09:47:12.890473    4019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:47:12.890478    4019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 09:47:12.890486    4019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-996000 NodeName:stopped-upgrade-996000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 09:47:12.890545    4019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-996000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 09:47:12.890607    4019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0814 09:47:12.893926    4019 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:47:12.893983    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:47:12.897648    4019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0814 09:47:12.903435    4019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:47:12.909254    4019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0814 09:47:12.915225    4019 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0814 09:47:12.916637    4019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
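The one-liner above pins control-plane.minikube.internal idempotently: any prior mapping is filtered out, the current one appended, and the temp file copied back over /etc/hosts, so repeated starts never accumulate duplicate entries. Unrolled for readability:

	{
	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	    printf '10.0.2.15\tcontrol-plane.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts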
	I0814 09:47:12.921115    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:12.989293    4019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 09:47:12.995784    4019 certs.go:68] Setting up /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000 for IP: 10.0.2.15
	I0814 09:47:12.995796    4019 certs.go:194] generating shared ca certs ...
	I0814 09:47:12.995805    4019 certs.go:226] acquiring lock for ca certs: {Name:mk41737d7568a132ec38012a87fa9d3345f331c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:12.995985    4019 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.key
	I0814 09:47:12.996035    4019 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.key
	I0814 09:47:12.996041    4019 certs.go:256] generating profile certs ...
	I0814 09:47:12.996113    4019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/client.key
	I0814 09:47:12.996131    4019 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key.1b5cac53
	I0814 09:47:12.996144    4019 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt.1b5cac53 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0814 09:47:13.283174    4019 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt.1b5cac53 ...
	I0814 09:47:13.283190    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt.1b5cac53: {Name:mk34f4324d8adb08a602260706cc47dfde65af01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:13.283504    4019 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key.1b5cac53 ...
	I0814 09:47:13.283510    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key.1b5cac53: {Name:mk3dc04017450c5ab8112180685b26cf9d4c5148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:13.283650    4019 certs.go:381] copying /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt.1b5cac53 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt
	I0814 09:47:13.283801    4019 certs.go:385] copying /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key.1b5cac53 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key
	I0814 09:47:13.283969    4019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/proxy-client.key
	I0814 09:47:13.284101    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600.pem (1338 bytes)
	W0814 09:47:13.284133    4019 certs.go:480] ignoring /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600_empty.pem, impossibly tiny 0 bytes
	I0814 09:47:13.284139    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem (1675 bytes)
	I0814 09:47:13.284165    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem (1082 bytes)
	I0814 09:47:13.284192    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:47:13.284225    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem (1675 bytes)
	I0814 09:47:13.284277    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem (1708 bytes)
	I0814 09:47:13.284674    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:47:13.293162    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 09:47:13.301683    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:47:13.313966    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0814 09:47:13.322138    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 09:47:13.330365    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 09:47:13.338617    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:47:13.346628    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 09:47:13.355067    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem --> /usr/share/ca-certificates/16002.pem (1708 bytes)
	I0814 09:47:13.362761    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:47:13.370424    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600.pem --> /usr/share/ca-certificates/1600.pem (1338 bytes)
	I0814 09:47:13.378908    4019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:47:13.384822    4019 ssh_runner.go:195] Run: openssl version
	I0814 09:47:13.387292    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16002.pem && ln -fs /usr/share/ca-certificates/16002.pem /etc/ssl/certs/16002.pem"
	I0814 09:47:13.390733    4019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16002.pem
	I0814 09:47:13.392429    4019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:16 /usr/share/ca-certificates/16002.pem
	I0814 09:47:13.392469    4019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16002.pem
	I0814 09:47:13.394559    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16002.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:47:13.398220    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:47:13.402263    4019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:13.406706    4019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:13.406849    4019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:13.409980    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:47:13.414096    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1600.pem && ln -fs /usr/share/ca-certificates/1600.pem /etc/ssl/certs/1600.pem"
	I0814 09:47:13.417934    4019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1600.pem
	I0814 09:47:13.419806    4019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:16 /usr/share/ca-certificates/1600.pem
	I0814 09:47:13.419849    4019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1600.pem
	I0814 09:47:13.421859    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1600.pem /etc/ssl/certs/51391683.0"
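The test/ln pairs above follow OpenSSL's c_rehash convention: a CA certificate becomes system-trusted once a symlink named after its subject-name hash (b5213941.0, 3ec20f2e.0, and 51391683.0 here) points at the PEM in /etc/ssl/certs. The general pattern, using minikubeCA.pem as the example:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"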
	I0814 09:47:13.425665    4019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 09:47:13.427523    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 09:47:13.429793    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 09:47:13.432297    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 09:47:13.434584    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 09:47:13.436781    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 09:47:13.439160    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
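Each -checkend 86400 run above is a 24-hour expiry probe: openssl exits non-zero if the certificate expires within the given number of seconds, which gives a cheap yes/no on whether a control-plane cert needs regenerating. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for at least 24h" \
	    || echo "expires within 24h (or is already expired)"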
	I0814 09:47:13.441402    4019 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-996000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50269 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0814 09:47:13.441492    4019 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0814 09:47:13.453961    4019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:47:13.457759    4019 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 09:47:13.457767    4019 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 09:47:13.457810    4019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 09:47:13.462155    4019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:13.462426    4019 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-996000" does not appear in /Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:47:13.462477    4019 kubeconfig.go:62] /Users/jenkins/minikube-integration/19446-1067/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-996000" cluster setting kubeconfig missing "stopped-upgrade-996000" context setting]
	I0814 09:47:13.462609    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/kubeconfig: {Name:mkd5271b15535f495ab8e34d870e7dbcadc9c40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:13.463022    4019 kapi.go:59] client config for stopped-upgrade-996000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/client.key", CAFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102907e30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 09:47:13.463361    4019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 09:47:13.466477    4019 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-996000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
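The drift is exactly the settings that changed between the minikube version that wrote the old file and the one restarting it: the CRI socket gained the unix:// scheme prefix, and the kubelet cgroup driver moved from systemd to cgroupfs, with hairpinMode and runtimeRequestTimeout newly pinned. Rather than patching the file in place, minikube reconciles by overwriting it wholesale, as the later cp shows:

	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml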
	I0814 09:47:13.466484    4019 kubeadm.go:1160] stopping kube-system containers ...
	I0814 09:47:13.466535    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0814 09:47:13.479471    4019 docker.go:483] Stopping containers: [af86f8f14004 1c40d2ec1695 e325fbc948d1 9575eb1a63d7 7e949e3a70a3 9a7859c188cb eee5979245b1 a41ec406c2ba]
	I0814 09:47:13.479540    4019 ssh_runner.go:195] Run: docker stop af86f8f14004 1c40d2ec1695 e325fbc948d1 9575eb1a63d7 7e949e3a70a3 9a7859c188cb eee5979245b1 a41ec406c2ba
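The --filter=name pattern leans on the dockershim/cri-dockerd naming scheme, under which kubelet-created containers are named k8s_<container>_<pod>_<namespace>_<pod-uid>_<attempt>; matching on the namespace field selects every kube-system container, running or exited:

	docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}'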
	I0814 09:47:13.490780    4019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 09:47:13.496975    4019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:47:13.500730    4019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:47:13.500738    4019 kubeadm.go:157] found existing configuration files:
	
	I0814 09:47:13.500778    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf
	I0814 09:47:13.503940    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 09:47:13.503999    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 09:47:13.508194    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf
	I0814 09:47:13.513911    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 09:47:13.513978    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 09:47:13.517470    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf
	I0814 09:47:13.520788    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 09:47:13.520830    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:47:13.523679    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf
	I0814 09:47:13.526359    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 09:47:13.526396    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 09:47:13.529614    4019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:47:13.532961    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:13.558040    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:13.881507    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:14.003409    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:14.035322    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
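Because existing configuration files were found, the control plane is rebuilt through individual kubeadm phases rather than a full kubeadm init; stripped of the env PATH wrapper, the five Run lines above amount to:

	sudo kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml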
	I0814 09:47:14.058412    4019 api_server.go:52] waiting for apiserver process to appear ...
	I0814 09:47:14.058477    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:11.310892    4033 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0814 09:47:11.311010    4033 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0814 09:47:11.312373    4033 kubeadm.go:883] updating cluster {Name:running-upgrade-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50343 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0814 09:47:11.312420    4033 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0814 09:47:11.312460    4033 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0814 09:47:11.323330    4033 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0814 09:47:11.323338    4033 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0814 09:47:11.323384    4033 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0814 09:47:11.326578    4033 ssh_runner.go:195] Run: which lz4
	I0814 09:47:11.327920    4033 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 09:47:11.329237    4033 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 09:47:11.329249    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0814 09:47:12.357140    4033 docker.go:649] duration metric: took 1.029334042s to copy over tarball
	I0814 09:47:12.357202    4033 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 09:47:13.525716    4033 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.168603083s)
	I0814 09:47:13.525728    4033 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 09:47:13.543116    4033 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0814 09:47:13.546400    4033 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0814 09:47:13.551743    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:13.633440    4033 ssh_runner.go:195] Run: sudo systemctl restart docker
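Process 4033 takes the preload path instead of per-image transfers: the host copies one lz4 tarball of the entire docker image store into the VM, unpacks it over /var, restores repositories.json, and restarts docker so the daemon picks up the injected overlay2 store. Condensed, again with vm and $CACHE as placeholders:

	scp "$CACHE/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4" vm:/preloaded.tar.lz4
	ssh vm 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4'
	ssh vm 'sudo rm /preloaded.tar.lz4 && sudo systemctl restart docker'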
	I0814 09:47:14.052311    4033 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0814 09:47:14.073166    4033 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0814 09:47:14.073177    4033 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0814 09:47:14.073182    4033 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 09:47:14.077397    4033 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:14.079097    4033 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:14.081319    4033 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:14.081468    4033 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:14.084646    4033 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:14.084733    4033 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:14.086780    4033 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:14.086931    4033 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:14.088561    4033 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:14.088627    4033 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:14.090454    4033 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:14.090514    4033 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:14.092130    4033 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:14.092202    4033 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0814 09:47:14.093433    4033 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:14.094564    4033 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0814 09:47:14.518111    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:14.529259    4033 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0814 09:47:14.529299    4033 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:14.529354    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:14.546499    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0814 09:47:14.548098    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:14.551590    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:14.564307    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:14.574437    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:14.577997    4033 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0814 09:47:14.578031    4033 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:14.578143    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0814 09:47:14.582969    4033 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0814 09:47:14.583088    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:14.583177    4033 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0814 09:47:14.583193    4033 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:14.583215    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:14.587989    4033 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0814 09:47:14.588013    4033 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:14.588073    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:14.608815    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0814 09:47:14.621612    4033 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0814 09:47:14.621636    4033 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:14.621697    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:14.627347    4033 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0814 09:47:14.627364    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0814 09:47:14.627369    4033 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:14.627480    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:14.629429    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0814 09:47:14.631428    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0814 09:47:14.639142    4033 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0814 09:47:14.639165    4033 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0814 09:47:14.639214    4033 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0814 09:47:14.648419    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0814 09:47:14.648553    4033 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0814 09:47:14.658358    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0814 09:47:14.658453    4033 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0814 09:47:14.666841    4033 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0814 09:47:14.666895    4033 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0814 09:47:14.666905    4033 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0814 09:47:14.666921    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0814 09:47:14.666950    4033 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0814 09:47:14.666962    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0814 09:47:14.670135    4033 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0814 09:47:14.670200    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0814 09:47:14.712002    4033 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0814 09:47:14.712018    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0814 09:47:14.790234    4033 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0814 09:47:14.798874    4033 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0814 09:47:14.798899    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0814 09:47:14.804987    4033 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0814 09:47:14.805164    4033 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:14.926901    4033 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0814 09:47:14.926923    4033 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:14.926984    4033 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:14.927643    4033 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0814 09:47:15.075764    4033 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0814 09:47:15.075783    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0814 09:47:15.243711    4033 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0814 09:47:15.243751    4033 cache_images.go:92] duration metric: took 1.170656125s to LoadCachedImages
	W0814 09:47:15.243796    4033 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0814 09:47:15.243801    4033 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0814 09:47:15.243860    4033 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-579000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 09:47:15.243922    4033 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0814 09:47:15.259540    4033 cni.go:84] Creating CNI manager for ""
	I0814 09:47:15.259554    4033 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:47:15.259559    4033 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 09:47:15.259567    4033 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-579000 NodeName:running-upgrade-579000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 09:47:15.259648    4033 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-579000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
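minikube emits the evictionHard percentages above through a Printf-style call with no arguments, so in raw logs a literal `"0%"` comes out as `"0%!"(MISSING)`: Go's fmt parses the `%"` as a verb with no operand and prints its "too few arguments" marker. A two-line reproduction:

    package main

    import "fmt"

    func main() {
    	line := `nodefs.available: "0%"`
    	fmt.Printf(line + "\n") // fmt treats %" as a verb with no argument:
    	//   nodefs.available: "0%!"(MISSING)
    	fmt.Printf("%s\n", line) // correct: nodefs.available: "0%"
    }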
	I0814 09:47:15.259708    4033 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0814 09:47:15.263860    4033 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:47:15.263892    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:47:15.267335    4033 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0814 09:47:15.273031    4033 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:47:15.278174    4033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0814 09:47:15.283941    4033 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0814 09:47:15.285620    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:15.374801    4033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 09:47:15.380993    4033 certs.go:68] Setting up /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000 for IP: 10.0.2.15
	I0814 09:47:15.381003    4033 certs.go:194] generating shared ca certs ...
	I0814 09:47:15.381011    4033 certs.go:226] acquiring lock for ca certs: {Name:mk41737d7568a132ec38012a87fa9d3345f331c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:15.381150    4033 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.key
	I0814 09:47:15.381186    4033 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.key
	I0814 09:47:15.381191    4033 certs.go:256] generating profile certs ...
	I0814 09:47:15.381266    4033 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/client.key
	I0814 09:47:15.381285    4033 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key.bba820ac
	I0814 09:47:15.381297    4033 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt.bba820ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0814 09:47:15.460837    4033 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt.bba820ac ...
	I0814 09:47:15.460848    4033 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt.bba820ac: {Name:mk32cac80ede8d2dd9c479d6a88b6194cbfdf702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:15.461434    4033 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key.bba820ac ...
	I0814 09:47:15.461440    4033 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key.bba820ac: {Name:mk27ad572bf7f7f21f28ce0746eb0bf92af71656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:15.462652    4033 certs.go:381] copying /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt.bba820ac -> /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt
	I0814 09:47:15.462795    4033 certs.go:385] copying /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key.bba820ac -> /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key
	I0814 09:47:15.462957    4033 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/proxy-client.key
	I0814 09:47:15.463084    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600.pem (1338 bytes)
	W0814 09:47:15.463108    4033 certs.go:480] ignoring /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600_empty.pem, impossibly tiny 0 bytes
	I0814 09:47:15.463113    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem (1675 bytes)
	I0814 09:47:15.463134    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem (1082 bytes)
	I0814 09:47:15.463154    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:47:15.463179    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem (1675 bytes)
	I0814 09:47:15.463216    4033 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem (1708 bytes)
	I0814 09:47:15.463551    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:47:15.472069    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 09:47:15.479465    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:47:15.486890    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0814 09:47:15.494549    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 09:47:15.501859    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 09:47:15.508933    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:47:15.516130    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 09:47:15.523625    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:47:15.531124    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600.pem --> /usr/share/ca-certificates/1600.pem (1338 bytes)
	I0814 09:47:15.538641    4033 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem --> /usr/share/ca-certificates/16002.pem (1708 bytes)
	I0814 09:47:15.546100    4033 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:47:15.551434    4033 ssh_runner.go:195] Run: openssl version
	I0814 09:47:15.553496    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16002.pem && ln -fs /usr/share/ca-certificates/16002.pem /etc/ssl/certs/16002.pem"
	I0814 09:47:15.556634    4033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16002.pem
	I0814 09:47:15.558255    4033 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:16 /usr/share/ca-certificates/16002.pem
	I0814 09:47:15.558277    4033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16002.pem
	I0814 09:47:15.560415    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16002.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:47:15.563950    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:47:15.567699    4033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:15.569715    4033 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:15.569740    4033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:15.571685    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:47:15.574985    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1600.pem && ln -fs /usr/share/ca-certificates/1600.pem /etc/ssl/certs/1600.pem"
	I0814 09:47:15.578466    4033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1600.pem
	I0814 09:47:15.580017    4033 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:16 /usr/share/ca-certificates/1600.pem
	I0814 09:47:15.580045    4033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1600.pem
	I0814 09:47:15.581983    4033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1600.pem /etc/ssl/certs/51391683.0"
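The `openssl x509 -hash -noout` calls above print each certificate's subject hash, which the Linux ca-certificates layout uses as the symlink name (`<hash>.0`) under /etc/ssl/certs; that is where 3ec20f2e.0, b5213941.0, and 51391683.0 come from. A sketch of the same hash-and-link step, shelling out to openssl rather than reimplementing the subject-hash algorithm:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash creates /etc/ssl/certs/<subject-hash>.0 -> pemPath,
    // the same effect as the `openssl x509 -hash` + `ln -fs` pair above.
    func linkByHash(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // emulate ln -f: replace an existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }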
	I0814 09:47:15.584918    4033 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 09:47:15.586658    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 09:47:15.589018    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 09:47:15.591187    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 09:47:15.593313    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 09:47:15.595632    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 09:47:15.597735    4033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
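`openssl x509 -checkend 86400` exits non-zero if the certificate will have expired 86400 seconds (24 hours) from now, which is how the six control-plane certs above are screened for imminent expiry. The equivalent predicate in Go, using one of the paths checked above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Same test as -checkend 86400: is the cert still valid 24h from now?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h")
    	}
    }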
	I0814 09:47:15.599677    4033 kubeadm.go:392] StartCluster: {Name:running-upgrade-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50343 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0814 09:47:15.599748    4033 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0814 09:47:15.611506    4033 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:47:15.616405    4033 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 09:47:15.616413    4033 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 09:47:15.616445    4033 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 09:47:15.620164    4033 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:15.620451    4033 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-579000" does not appear in /Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:47:15.620550    4033 kubeconfig.go:62] /Users/jenkins/minikube-integration/19446-1067/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-579000" cluster setting kubeconfig missing "running-upgrade-579000" context setting]
	I0814 09:47:15.620753    4033 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/kubeconfig: {Name:mkd5271b15535f495ab8e34d870e7dbcadc9c40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:15.621240    4033 kapi.go:59] client config for running-upgrade-579000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/client.key", CAFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10605fe30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 09:47:15.621573    4033 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 09:47:15.624541    4033 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-579000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
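The drift check above is just `sudo diff -u` between the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new; diff's exit status 1 (files differ) is what triggers the reconfigure path. A sketch of that decision, assuming the same file layout:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs the same `sudo diff -u old new` as the log and
    // treats exit status 1 (files differ) as drift; status 0 means no drift.
    func configDrifted(oldPath, newPath string) (bool, []byte, error) {
    	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, out, nil
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return true, out, nil
    	}
    	return false, out, err // status > 1: diff itself failed
    }

    func main() {
    	drifted, diff, err := configDrifted(
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	if err == nil && drifted {
    		fmt.Printf("will reconfigure:\n%s", diff)
    	}
    }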
	I0814 09:47:15.624547    4033 kubeadm.go:1160] stopping kube-system containers ...
	I0814 09:47:15.624600    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0814 09:47:15.637073    4033 docker.go:483] Stopping containers: [5f30d30839a6 4f54c1f789c0 7a70521ea1ee e7b35ee6cc2a 94d90481822b e95e6926ff67 f85fd17e0cb2 053e1a0d063c a455c7b28a0f f1fd8f95e57d 8cc17453c508 1d1ddf9610ca 03512631e6e0 9f9159bc24e9 a62bb551afa1 6ea1dddbda9a]
	I0814 09:47:15.637135    4033 ssh_runner.go:195] Run: docker stop 5f30d30839a6 4f54c1f789c0 7a70521ea1ee e7b35ee6cc2a 94d90481822b e95e6926ff67 f85fd17e0cb2 053e1a0d063c a455c7b28a0f f1fd8f95e57d 8cc17453c508 1d1ddf9610ca 03512631e6e0 9f9159bc24e9 a62bb551afa1 6ea1dddbda9a
	I0814 09:47:15.648553    4033 ssh_runner.go:195] Run: sudo systemctl stop kubelet
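Before re-running kubeadm, the restart path stops every kube-system container and then the kubelet itself, so the init phases below can rewrite the static-pod manifests without the old control plane interfering. A sketch of that stop sequence, with the container ID list shortened for illustration:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Shortened ID list; the log above stops all sixteen containers.
    	ids := []string{"5f30d30839a6", "4f54c1f789c0", "7a70521ea1ee"}
    	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
    		log.Fatal(err)
    	}
    	if err := exec.Command("sudo", "systemctl", "stop", "kubelet").Run(); err != nil {
    		log.Fatal(err)
    	}
    }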
	I0814 09:47:14.560517    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:15.058637    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:15.063884    4019 api_server.go:72] duration metric: took 1.005552958s to wait for apiserver process to appear ...
	I0814 09:47:15.063895    4019 api_server.go:88] waiting for apiserver healthz status ...
	I0814 09:47:15.063907    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:15.730598    4033 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:47:15.734458    4033 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug 14 16:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug 14 16:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 14 16:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 14 16:46 /etc/kubernetes/scheduler.conf
	
	I0814 09:47:15.734486    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/admin.conf
	I0814 09:47:15.737298    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:15.737322    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 09:47:15.740152    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/kubelet.conf
	I0814 09:47:15.743195    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:15.743221    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 09:47:15.746548    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/controller-manager.conf
	I0814 09:47:15.749754    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:15.749792    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:47:15.752702    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/scheduler.conf
	I0814 09:47:15.755458    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:15.755480    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 09:47:15.758754    4033 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:47:15.762165    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:15.794353    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:16.206104    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:16.407624    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:16.428634    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
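Rather than a full `kubeadm init`, the restart replays individual init phases, certs, kubeconfig, kubelet-start, control-plane, and etcd, against the same config file, so existing cluster state is preserved. The sequence as a sketch (plain sudo in place of the log's `sudo env PATH=...` wrapper):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			log.Fatalf("%v: %s", err, out)
    		}
    	}
    }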
	I0814 09:47:16.453004    4033 api_server.go:52] waiting for apiserver process to appear ...
	I0814 09:47:16.453084    4033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:16.955214    4033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:17.455044    4033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:17.955082    4033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:17.960412    4033 api_server.go:72] duration metric: took 1.50751875s to wait for apiserver process to appear ...
	I0814 09:47:17.960424    4033 api_server.go:88] waiting for apiserver healthz status ...
	I0814 09:47:17.960434    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:20.065617    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:20.065640    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:22.962164    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:22.962196    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:25.065541    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:25.065576    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:27.962080    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:27.962125    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:30.065578    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:30.065630    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:32.962103    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:32.962129    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:35.066007    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:35.066055    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:37.962283    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:37.962359    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:40.066567    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:40.066645    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:42.963238    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:42.963288    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:45.068050    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:45.068123    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:47.964023    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:47.964089    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:50.069584    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:50.069675    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:52.965213    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:52.965282    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:55.071192    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:55.071277    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:57.966913    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:57.966992    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:00.072360    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:00.072409    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:02.968999    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:02.969046    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:05.074541    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:05.074595    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:07.969904    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:07.969992    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:10.076279    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:10.076365    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:12.970998    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:12.971045    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:15.077490    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
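Both processes (PIDs 4019 and 4033) are running the same wait loop here: GET https://10.0.2.15:8443/healthz with a short client timeout, log the failure, retry roughly every five seconds, and fall back to log collection once the overall budget is spent. A minimal version of that loop, with assumed timeout values and TLS verification skipped (minikube itself authenticates with the profile's client cert):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Short per-request timeout, as in the "Client.Timeout exceeded" lines.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("apiserver healthy")
    			return
    		}
    		if err != nil {
    			fmt.Println("stopped:", err)
    		} else {
    			resp.Body.Close()
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("healthz never came up; falling back to log collection")
    }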
	I0814 09:48:15.077656    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:15.090882    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:15.090957    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:15.102370    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:15.102440    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:15.113408    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:15.113471    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:15.123832    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:15.123903    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:15.134719    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:15.134783    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:15.145622    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:15.145684    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:15.155639    4019 logs.go:276] 0 containers: []
	W0814 09:48:15.155652    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:15.155709    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:15.167116    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:15.167139    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:15.167144    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:15.206654    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:15.206689    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:15.222078    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:15.222094    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:15.233164    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:15.233175    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:15.258673    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:15.258681    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:15.263078    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:15.263087    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:15.274765    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:15.274776    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:15.292122    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:15.292135    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:15.307212    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:15.307226    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:15.390038    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:15.390051    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:15.402137    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:15.402147    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:15.418769    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:15.418779    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:15.434780    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:15.434789    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:15.450175    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:15.450188    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:15.464173    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:15.464182    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:15.491160    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:15.491169    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:15.504717    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:15.504732    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:18.017697    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:17.973007    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:17.973422    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:18.013557    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:18.013704    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:18.034987    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:18.035087    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:18.050832    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:18.050911    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:18.063932    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:18.063998    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:18.075357    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:18.075444    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:18.086608    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:18.086675    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:18.097490    4033 logs.go:276] 0 containers: []
	W0814 09:48:18.097500    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:18.097562    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:18.109155    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:18.109180    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:18.109187    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:18.121308    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:18.121320    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:18.141028    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:18.141038    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:18.152791    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:18.152802    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:18.164533    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:18.164544    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:18.178346    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:18.178355    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:18.192510    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:18.192519    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:48:18.213692    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:18.213702    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:18.281785    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:18.281801    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:18.297021    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:18.297035    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:18.334883    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:18.334891    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:18.346723    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:18.346736    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:18.359382    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:18.359398    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:48:18.371864    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:18.371874    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:18.398924    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:18.398933    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:18.403463    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:18.403471    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:18.429089    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:18.429101    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:48:23.019887    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:23.020243    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:23.055829    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:23.055968    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:23.076652    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:23.076749    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:23.090792    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:23.090873    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:23.103336    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:23.103407    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:23.114812    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:23.114876    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:23.125843    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:23.125920    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:23.136759    4019 logs.go:276] 0 containers: []
	W0814 09:48:23.136772    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:23.136835    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:23.148984    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:23.149001    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:23.149006    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:23.167121    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:23.167131    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:23.194279    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:23.194287    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:23.198543    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:23.198550    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:23.213829    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:23.213839    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:23.229053    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:23.229064    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:23.240994    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:23.241009    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:23.252319    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:23.252330    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:23.265970    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:23.265982    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:23.280872    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:23.280885    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:23.292833    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:23.292844    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:23.306713    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:23.306724    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:23.331343    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:23.331354    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:23.345453    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:23.345467    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:23.357724    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:23.357735    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:23.393988    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:23.393996    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:23.433984    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:23.433998    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:20.953641    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:25.947552    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:25.956156    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:25.956567    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:25.992725    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:25.992859    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:26.014026    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:26.014145    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:26.028847    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:26.028925    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:26.041046    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:26.041114    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:26.051768    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:26.051844    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:26.062617    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:26.062684    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:26.073279    4033 logs.go:276] 0 containers: []
	W0814 09:48:26.073288    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:26.073341    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:26.084032    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:26.084050    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:26.084056    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:26.095964    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:26.095978    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:26.123085    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:26.123095    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:26.149635    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:26.149648    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:26.163956    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:26.163973    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:26.181914    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:26.181924    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:26.196509    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:26.196523    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:26.209711    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:26.209724    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:26.221463    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:26.221477    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:26.232983    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:26.232996    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:26.269193    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:26.269199    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:26.306796    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:26.306810    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:48:26.321542    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:26.321552    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:48:26.332557    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:26.332569    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:26.344360    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:26.344371    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:26.348650    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:26.348657    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:26.362801    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:26.362812    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:48:28.876059    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:30.950445    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:30.950864    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:30.990794    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:30.990944    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:31.009846    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:31.009943    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:31.024449    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:31.024526    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:31.036087    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:31.036159    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:31.046524    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:31.046601    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:31.057079    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:31.057146    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:31.067325    4019 logs.go:276] 0 containers: []
	W0814 09:48:31.067335    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:31.067388    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:31.077620    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:31.077636    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:31.077643    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:31.114591    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:31.114602    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:31.149659    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:31.149671    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:31.171431    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:31.171442    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:31.194914    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:31.194924    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:31.210087    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:31.210097    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:31.222113    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:31.222124    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:31.233784    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:31.233797    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:31.248110    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:31.248121    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:31.259918    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:31.259927    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:31.278001    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:31.278012    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:31.291978    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:31.291988    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:31.303129    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:31.303139    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:31.327261    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:31.327272    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:31.331517    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:31.331526    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:31.358052    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:31.358063    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:31.369581    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:31.369593    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:33.884048    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:33.878338    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:33.878705    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:33.909755    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:33.909900    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:33.931030    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:33.931136    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:33.945046    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:33.945131    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:33.956547    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:33.956617    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:33.968187    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:33.968252    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:33.979336    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:33.979402    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:33.990168    4033 logs.go:276] 0 containers: []
	W0814 09:48:33.990180    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:33.990230    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:34.000537    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:34.000553    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:34.000558    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:34.043354    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:34.043367    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:34.060110    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:34.060120    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:34.072286    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:34.072296    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:34.089903    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:34.089912    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:34.102786    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:34.102796    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:34.117783    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:34.117795    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:34.144057    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:34.144067    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:34.158500    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:34.158510    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:48:34.170589    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:34.170602    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:34.182614    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:34.182626    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:34.209924    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:34.209939    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:34.215102    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:34.215111    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:34.227321    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:34.227332    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:34.238987    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:34.238997    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:34.276287    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:34.276297    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:48:34.290004    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:34.290017    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:48:38.886319    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:38.886516    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:38.905506    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:38.905601    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:38.919367    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:38.919454    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:38.931198    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:38.931266    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:38.941806    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:38.941880    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:38.952672    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:38.952744    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:38.963815    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:38.963885    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:38.979802    4019 logs.go:276] 0 containers: []
	W0814 09:48:38.979812    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:38.979865    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:38.990234    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:38.990255    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:38.990260    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:39.004234    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:39.004246    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:39.016191    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:39.016205    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:39.030088    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:39.030097    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:39.042310    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:39.042328    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:39.057029    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:39.057038    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:39.074208    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:39.074218    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:39.113369    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:39.113388    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:39.117985    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:39.117999    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:39.133556    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:39.133571    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:39.146060    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:39.146071    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:39.171743    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:39.171753    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:39.185376    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:39.185386    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:39.208586    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:39.208597    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:39.232768    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:39.232778    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:39.267458    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:39.267472    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:39.282010    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:39.282022    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:36.802851    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:41.796044    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:41.804950    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:41.805077    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:41.824437    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:41.824541    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:41.839045    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:41.839113    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:41.851302    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:41.851376    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:41.862245    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:41.862310    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:41.872524    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:41.872593    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:41.883101    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:41.883164    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:41.896692    4033 logs.go:276] 0 containers: []
	W0814 09:48:41.896704    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:41.896766    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:41.907365    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:41.907382    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:41.907387    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:41.921252    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:41.921263    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:41.935181    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:41.935190    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:41.946578    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:41.946589    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:41.982583    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:41.982596    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:48:41.997772    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:41.997784    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:48:42.015951    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:42.015963    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:42.028945    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:42.028957    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:42.053434    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:42.053446    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:42.065138    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:42.065148    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:42.089782    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:42.089790    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:48:42.101087    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:42.101100    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:42.113155    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:42.113166    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:42.152634    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:42.152647    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:42.157469    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:42.157477    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:42.172522    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:42.172532    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:42.189029    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:42.189040    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:44.708265    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:46.798221    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:46.798472    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:46.821604    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:46.821708    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:46.837405    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:46.837494    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:46.850000    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:46.850071    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:46.861194    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:46.861270    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:46.871487    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:46.871556    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:46.882042    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:46.882105    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:46.892252    4019 logs.go:276] 0 containers: []
	W0814 09:48:46.892262    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:46.892314    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:46.902773    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:46.902792    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:46.902797    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:46.921881    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:46.921892    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:46.937959    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:46.937969    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:46.955093    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:46.955103    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:46.969427    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:46.969442    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:46.980807    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:46.980818    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:46.992811    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:46.992821    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:47.031837    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:47.031846    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:47.059228    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:47.059239    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:47.070189    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:47.070201    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:47.081457    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:47.081468    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:47.085777    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:47.085786    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:47.121778    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:47.121790    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:47.138094    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:47.138105    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:47.155159    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:47.155174    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:47.179459    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:47.179473    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:47.193870    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:47.193879    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:49.710425    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:49.710668    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:49.731690    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:49.731770    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:49.744824    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:49.744889    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:49.755953    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:49.756025    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:49.766256    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:49.766330    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:49.776785    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:49.776860    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:49.787490    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:49.787561    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:49.797657    4033 logs.go:276] 0 containers: []
	W0814 09:48:49.797667    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:49.797727    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:49.808071    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:49.808090    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:49.808095    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:48:49.820590    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:49.820602    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:49.835792    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:49.835805    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:48:49.850267    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:49.850277    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:49.861548    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:49.861559    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:49.898560    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:49.898572    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:49.915675    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:49.915685    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:49.940232    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:49.940244    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:49.954373    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:49.954387    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:49.965813    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:49.965825    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:49.978365    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:49.978376    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:50.015727    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:50.015737    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:50.020022    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:50.020029    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:50.031129    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:50.031141    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:50.056419    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:50.056427    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:50.068144    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:50.068158    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:50.081905    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:50.081919    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:48:49.711892    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:52.595120    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:54.713903    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:54.714076    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:54.729761    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:54.729833    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:54.740288    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:54.740352    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:54.750349    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:54.750421    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:54.761345    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:54.761427    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:54.772398    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:54.772467    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:54.788190    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:54.788256    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:54.797655    4019 logs.go:276] 0 containers: []
	W0814 09:48:54.797665    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:54.797719    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:54.816634    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:54.816650    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:54.816656    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:54.828280    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:54.828292    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:54.847793    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:54.847803    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:54.862339    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:54.862349    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:54.874340    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:54.874350    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:54.885291    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:54.885302    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:54.889973    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:54.889979    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:54.914012    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:54.914022    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:54.927636    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:54.927645    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:54.941648    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:54.941660    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:54.952761    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:54.952774    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:54.978307    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:54.978315    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:55.015870    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:55.015876    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:55.050192    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:55.050203    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:55.065084    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:55.065094    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:55.076987    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:55.077000    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:55.096636    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:55.096646    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:57.610654    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:57.597372    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:57.597837    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:57.637671    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:48:57.637808    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:57.660037    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:48:57.660153    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:57.674971    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:48:57.675044    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:57.687913    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:48:57.687987    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:57.698978    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:48:57.699045    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:57.709821    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:48:57.709898    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:57.720027    4033 logs.go:276] 0 containers: []
	W0814 09:48:57.720038    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:57.720096    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:57.731087    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:48:57.731104    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:48:57.731110    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:48:57.742605    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:57.742616    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:57.780654    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:57.780663    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:57.785298    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:57.785309    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:57.820845    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:48:57.820854    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:48:57.834748    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:48:57.834758    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:48:57.852402    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:48:57.852412    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:48:57.876725    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:48:57.876734    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:48:57.890892    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:48:57.890904    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:48:57.902956    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:48:57.902972    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:48:57.917236    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:48:57.917247    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:48:57.928331    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:48:57.928342    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:48:57.944448    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:57.944463    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:57.970120    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:48:57.970129    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:57.981975    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:48:57.981989    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:48:58.000240    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:48:58.000251    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:48:58.012366    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:48:58.012379    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:00.525445    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:02.613129    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:02.613484    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:02.647146    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:02.647282    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:02.666430    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:02.666526    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:02.680934    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:02.681019    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:02.694850    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:02.694926    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:02.705774    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:02.705842    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:02.720140    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:02.720213    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:02.730291    4019 logs.go:276] 0 containers: []
	W0814 09:49:02.730303    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:02.730360    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:02.740775    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:02.740791    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:02.740797    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:02.784831    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:02.784846    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:02.799825    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:02.799837    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:02.824665    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:02.824677    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:02.835536    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:02.835548    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:02.847065    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:02.847076    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:02.859124    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:02.859139    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:02.897523    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:02.897533    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:02.901689    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:02.901697    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:02.924650    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:02.924658    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:02.935740    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:02.935753    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:02.949902    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:02.949915    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:02.961910    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:02.961921    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:02.978496    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:02.978508    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:02.990177    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:02.990191    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:03.007810    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:03.007822    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:03.026819    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:03.026829    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:05.527745    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:05.528126    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:05.561471    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:05.561598    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:05.580956    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:05.581079    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:05.594819    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:05.594896    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:05.608123    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:05.608200    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:05.618923    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:05.618996    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:05.630082    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:05.630151    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:05.640642    4033 logs.go:276] 0 containers: []
	W0814 09:49:05.640661    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:05.640720    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:05.651714    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:05.651734    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:05.651740    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:05.541824    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:05.687642    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:05.687653    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:05.703630    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:05.703640    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:05.715804    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:05.715816    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:05.728032    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:05.728043    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:05.739572    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:05.739584    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:05.751056    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:05.751067    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:05.755653    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:05.755659    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:05.769278    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:05.769289    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:05.785194    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:05.785209    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:05.797409    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:05.797419    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:05.811468    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:05.811480    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:05.835019    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:05.835029    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:05.847138    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:05.847150    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:05.864816    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:05.864827    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:05.890452    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:05.890461    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:05.926893    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:05.926901    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:08.442797    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:10.544108    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:10.544450    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:10.577910    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:10.578042    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:10.596152    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:10.596246    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:10.612241    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:10.612313    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:10.623033    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:10.623098    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:10.633317    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:10.633376    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:10.644113    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:10.644187    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:10.659537    4019 logs.go:276] 0 containers: []
	W0814 09:49:10.659548    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:10.659601    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:10.670633    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:10.670653    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:10.670658    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:10.684653    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:10.684665    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:10.721620    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:10.721629    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:10.746667    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:10.746678    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:10.757973    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:10.757984    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:10.772743    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:10.772753    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:10.786694    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:10.786704    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:10.799168    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:10.799181    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:10.810736    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:10.810747    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:10.846259    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:10.846270    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:10.872964    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:10.872975    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:10.885100    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:10.885118    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:10.889529    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:10.889536    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:10.903686    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:10.903706    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:10.918427    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:10.918436    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:10.935164    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:10.935174    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:10.947168    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:10.947178    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:13.466539    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:13.445370    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:13.445634    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:13.475669    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:13.475784    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:13.495314    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:13.495419    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:13.509571    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:13.509649    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:13.521528    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:13.521597    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:13.532108    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:13.532174    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:13.543304    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:13.543370    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:13.553873    4033 logs.go:276] 0 containers: []
	W0814 09:49:13.553884    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:13.553942    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:13.564554    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:13.564573    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:13.564578    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:13.578276    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:13.578286    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:13.589789    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:13.589799    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:13.626976    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:13.626984    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:13.652621    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:13.652630    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:13.670179    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:13.670189    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:13.682202    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:13.682214    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:13.694565    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:13.694577    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:13.709693    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:13.709707    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:13.723735    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:13.723745    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:13.743985    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:13.743994    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:13.770613    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:13.770620    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:13.781986    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:13.782003    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:13.786730    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:13.786740    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:13.822733    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:13.822746    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:13.837639    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:13.837650    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:13.848808    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:13.848820    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
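
Each gathering pass opens by enumerating container IDs per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, which is why most components report two IDs (the exited pre-restart container plus the current one) and kindnet reports none. A sketch of that enumeration, assuming local Docker access rather than minikube's ssh_runner, with hypothetical helper names:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // carries the k8s_<component> prefix that kubelet gives pod containers.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // Mirrors the "N containers: [...]" lines; an empty result is
            // what triggers the `No container was found matching "kindnet"`
            // warnings above.
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
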
	I0814 09:49:18.468699    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:18.468961    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:18.494369    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:18.494477    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:18.512483    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:18.512566    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:18.527156    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:18.527232    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:18.538158    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:18.538250    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:18.551740    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:18.551809    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:18.567688    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:18.567752    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:18.577825    4019 logs.go:276] 0 containers: []
	W0814 09:49:18.577836    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:18.577893    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:18.588410    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:18.588427    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:18.588432    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:18.602004    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:18.602018    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:18.613563    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:18.613574    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:18.637045    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:18.637052    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:18.674857    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:18.674865    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:18.702346    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:18.702357    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:18.717224    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:18.717233    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:18.735242    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:18.735251    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:18.747012    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:18.747023    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:18.758819    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:18.758830    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:18.762762    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:18.762771    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:18.797268    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:18.797282    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:18.818358    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:18.818368    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:18.844576    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:18.844591    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:18.864915    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:18.864926    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:18.882573    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:18.882584    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:18.898474    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:18.898488    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:16.362859    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:21.412135    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:21.365097    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:21.365246    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:21.380049    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:21.380143    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:21.392647    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:21.392723    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:21.403412    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:21.403480    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:21.413611    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:21.413670    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:21.432310    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:21.432380    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:21.443004    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:21.443075    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:21.454015    4033 logs.go:276] 0 containers: []
	W0814 09:49:21.454030    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:21.454089    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:21.464583    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:21.464604    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:21.464609    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:21.490928    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:21.490937    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:21.502631    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:21.502644    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:21.540557    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:21.540567    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:21.575537    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:21.575549    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:21.590769    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:21.590781    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:21.604412    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:21.604425    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:21.615859    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:21.615874    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:21.634431    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:21.634440    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:21.646850    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:21.646864    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:21.658413    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:21.658425    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:21.669512    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:21.669523    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:21.687776    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:21.687786    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:21.706472    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:21.706483    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:21.710929    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:21.710935    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:21.738055    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:21.738069    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:21.751016    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:21.751029    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:24.264865    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:26.414130    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:26.414327    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:26.432327    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:26.432417    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:26.448522    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:26.448598    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:26.460361    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:26.460441    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:26.471102    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:26.471167    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:26.481598    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:26.481658    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:26.493414    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:26.493512    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:26.503784    4019 logs.go:276] 0 containers: []
	W0814 09:49:26.503796    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:26.503853    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:26.514270    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:26.514288    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:26.514293    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:26.518488    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:26.518495    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:26.532528    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:26.532538    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:26.543982    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:26.543992    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:26.556610    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:26.556622    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:26.569445    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:26.569456    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:26.603654    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:26.603665    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:26.621177    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:26.621189    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:26.636296    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:26.636307    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:26.649591    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:26.649602    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:26.674317    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:26.674328    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:26.701195    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:26.701207    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:26.715682    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:26.715692    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:26.727327    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:26.727340    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:26.741354    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:26.741365    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:26.780218    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:26.780229    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:26.800135    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:26.800145    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:29.320205    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:29.266830    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:29.266975    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:29.279192    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:29.279271    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:29.291089    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:29.291158    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:29.301520    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:29.301586    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:29.311805    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:29.311874    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:29.322231    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:29.322285    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:29.345997    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:29.346070    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:29.358213    4033 logs.go:276] 0 containers: []
	W0814 09:49:29.358225    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:29.358285    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:29.368653    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:29.368670    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:29.368675    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:29.373748    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:29.373757    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:29.385682    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:29.385694    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:29.397199    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:29.397214    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:29.411566    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:29.411577    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:29.426316    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:29.426326    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:29.437870    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:29.437883    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:29.452098    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:29.452110    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:29.463030    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:29.463042    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:29.481837    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:29.481849    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:29.507242    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:29.507251    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:29.544065    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:29.544074    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:29.579771    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:29.579785    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:29.594567    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:29.594579    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:29.618304    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:29.618318    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:29.642420    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:29.642431    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:29.654252    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:29.654262    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
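
The container status step relies on a shell fallback worth unpacking: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The command substitution yields crictl's full path when the binary is installed and the bare word crictl otherwise (so the line still parses), and the outer || reruns the listing through docker ps -a whenever crictl is absent or fails. The same fallback sketched from Go, under the assumption of a local shell (minikube executes it inside the guest):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl (the CRI-level view) and falls back to
    // plain docker when crictl is unavailable or errors out, mirroring the
    // one-liner: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    func containerStatus() (string, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command(path, "ps", "-a").Output(); err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(out)
    }
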
	I0814 09:49:34.322199    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:34.322375    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:34.340332    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:34.340433    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:34.353939    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:34.354013    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:34.365122    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:34.365180    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:34.375574    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:34.375650    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:34.387727    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:34.387792    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:34.398919    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:34.398986    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:34.409493    4019 logs.go:276] 0 containers: []
	W0814 09:49:34.409505    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:34.409561    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:34.419767    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:34.419783    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:34.419788    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:34.433110    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:34.433120    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:34.444974    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:34.444984    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:34.459571    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:34.459579    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:32.168776    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:34.471099    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:34.471107    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:34.495038    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:34.495049    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:34.509438    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:34.509451    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:34.534727    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:34.534740    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:34.545828    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:34.545837    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:34.557438    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:34.557449    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:34.575315    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:34.575330    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:34.613180    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:34.613187    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:34.646528    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:34.646542    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:34.666018    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:34.666031    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:34.677611    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:34.677621    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:34.682985    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:34.682997    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:34.696984    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:34.696993    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:37.212365    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:37.170865    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:37.170987    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:37.182157    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:37.182229    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:37.193315    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:37.193384    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:37.204204    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:37.204270    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:37.215038    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:37.215093    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:37.226780    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:37.226845    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:37.244460    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:37.244521    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:37.254315    4033 logs.go:276] 0 containers: []
	W0814 09:49:37.254326    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:37.254378    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:37.264806    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:37.264824    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:37.264831    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:37.288847    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:37.288857    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:37.302897    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:37.302905    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:37.317720    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:37.317733    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:37.329638    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:37.329649    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:37.343333    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:37.343345    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:37.360548    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:37.360561    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:37.374979    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:37.374989    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:37.413358    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:37.413368    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:37.424334    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:37.424346    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:37.436376    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:37.436387    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:37.462231    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:37.462242    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:37.500468    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:37.500481    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:37.512176    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:37.512187    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:37.523556    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:37.523568    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:37.534990    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:37.535006    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:37.546665    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:37.546678    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:40.052879    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:42.214178    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:42.214280    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:42.225907    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:42.225991    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:42.238517    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:42.238582    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:42.248823    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:42.248888    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:42.259844    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:42.259915    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:42.269984    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:42.270052    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:42.280693    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:42.280761    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:42.291322    4019 logs.go:276] 0 containers: []
	W0814 09:49:42.291333    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:42.291389    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:42.301486    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:42.301504    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:42.301510    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:42.316086    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:42.316100    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:42.327454    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:42.327468    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:42.343044    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:42.343054    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:42.361001    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:42.361011    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:42.372427    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:42.372438    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:42.387756    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:42.387769    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:42.422013    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:42.422024    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:42.437390    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:42.437403    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:42.449667    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:42.449681    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:42.453916    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:42.453924    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:42.478831    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:42.478841    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:42.491174    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:42.491188    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:42.504783    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:42.504795    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:42.516521    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:42.516535    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:42.555655    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:42.555664    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:42.579955    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:42.579963    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
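
Taken together, PIDs 4019 and 4033 are each stuck in the same wait loop: probe /healthz, and on every deadline expiry re-enumerate containers and re-collect 400-line tails from every component before probing again, until an overall wait budget runs out. A compressed sketch of that control flow with hypothetical names; the real logic is spread across minikube's api_server.go and logs.go:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForAPIServer polls the health endpoint until it answers or the
    // overall budget expires. Each failed probe triggers a diagnostic pass,
    // which is why the log repeats the same "Gathering logs for ..." block
    // every few seconds.
    func waitForAPIServer(probe func() error, gather func(), budget time.Duration) error {
        deadline := time.Now().Add(budget)
        for time.Now().Before(deadline) {
            if err := probe(); err == nil {
                return nil
            }
            gather()                    // docker logs / journalctl / dmesg / describe nodes
            time.Sleep(2 * time.Second) // back off before the next healthz probe
        }
        return errors.New("apiserver never became healthy within budget")
    }

    func main() {
        probe := func() error { return errors.New("context deadline exceeded") }
        gather := func() { fmt.Println("gathering component logs ...") }
        if err := waitForAPIServer(probe, gather, 10*time.Second); err != nil {
            fmt.Println(err)
        }
    }
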
	I0814 09:49:45.055583    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:45.056046    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:45.094259    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:45.094376    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:45.115954    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:45.116062    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:45.132504    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:45.132587    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:45.146864    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:45.146938    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:45.157760    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:45.157830    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:45.169360    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:45.169437    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:45.180720    4033 logs.go:276] 0 containers: []
	W0814 09:49:45.180733    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:45.180796    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:45.192229    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:45.192247    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:45.192253    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:45.197393    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:45.197401    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:45.221473    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:45.221483    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:45.236662    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:45.236672    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:45.273967    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:45.273975    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:45.288336    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:45.288346    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:45.308657    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:45.308668    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:45.320282    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:45.320294    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:45.332762    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:45.332773    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:45.347676    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:45.347686    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:45.363684    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:45.363695    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:45.381037    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:45.381048    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:45.406931    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:45.406938    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:45.419130    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:45.419140    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:45.457585    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:45.457596    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:45.477092    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:45.477101    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:45.489142    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:45.489152    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:45.093367    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:48.002872    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:50.095576    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:50.095974    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:50.149971    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:50.150089    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:50.172255    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:50.172337    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:50.186744    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:50.186818    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:50.197532    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:50.197607    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:50.208293    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:50.208367    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:50.219654    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:50.219715    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:50.234306    4019 logs.go:276] 0 containers: []
	W0814 09:49:50.234318    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:50.234384    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:50.245123    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:50.245142    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:50.245149    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:50.268657    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:50.268665    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:50.302013    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:50.302024    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:50.316988    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:50.317001    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:50.329003    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:50.329013    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:50.340684    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:50.340696    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:50.356636    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:50.356646    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:50.361248    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:50.361255    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:50.375985    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:50.375997    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:50.390650    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:50.390664    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:50.405736    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:50.405746    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:50.430557    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:50.430568    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:50.442496    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:50.442508    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:50.454425    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:50.454436    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:50.465889    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:50.465898    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:50.501956    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:50.501971    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:50.519605    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:50.519618    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:53.033483    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:53.005086    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:53.005319    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:53.034849    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:49:53.034925    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:53.049574    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:49:53.049648    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:53.061523    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:49:53.061599    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:53.072595    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:49:53.072673    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:53.082853    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:49:53.082920    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:53.093072    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:49:53.093144    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:53.103494    4033 logs.go:276] 0 containers: []
	W0814 09:49:53.103507    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:53.103568    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:53.114040    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:49:53.114059    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:49:53.114065    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:49:53.128069    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:49:53.128080    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:49:53.145256    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:49:53.145267    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:53.157260    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:53.157272    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:53.193936    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:49:53.193951    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:49:53.218323    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:49:53.218334    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:49:53.232631    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:53.232642    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:53.269586    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:49:53.269594    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:49:53.280451    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:49:53.280459    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:49:53.291712    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:49:53.291727    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:49:53.307314    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:53.307324    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:53.332320    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:49:53.332327    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:49:53.347045    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:49:53.347056    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:49:53.358824    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:49:53.358834    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:49:53.370335    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:53.370345    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:53.374776    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:49:53.374783    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:49:53.396747    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:49:53.396758    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:49:58.035609    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:58.036027    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:58.073365    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:58.073498    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:58.094510    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:58.094602    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:58.114559    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:58.114634    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:58.126308    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:58.126383    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:58.137090    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:58.137159    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:58.151796    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:58.151862    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:58.162052    4019 logs.go:276] 0 containers: []
	W0814 09:49:58.162065    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:58.162122    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:58.172929    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:58.172975    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:58.172980    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:58.184431    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:58.184446    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:58.196353    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:58.196365    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:58.236517    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:58.236532    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:58.270821    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:58.270836    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:58.285142    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:58.285154    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:58.299177    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:58.299187    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:58.313388    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:58.313400    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:58.327187    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:58.327199    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:58.340570    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:58.340580    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:58.366597    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:58.366607    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:58.378105    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:58.378115    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:58.393232    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:58.393242    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:58.411030    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:58.411044    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:58.434995    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:58.435002    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:58.438924    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:58.438930    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:58.450322    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:58.450332    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:55.910213    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:00.972370    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:00.912875    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:00.913318    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:00.958878    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:00.958985    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:00.983721    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:00.983802    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:00.997792    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:00.997871    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:01.019422    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:01.019491    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:01.032305    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:01.032379    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:01.047374    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:01.047441    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:01.058055    4033 logs.go:276] 0 containers: []
	W0814 09:50:01.058067    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:01.058126    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:01.069853    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:01.069871    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:01.069876    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:01.087490    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:01.087500    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:01.102998    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:01.103011    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:01.128321    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:01.128336    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:01.162727    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:01.162738    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:01.177966    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:01.177979    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:01.190662    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:01.190674    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:01.213021    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:01.213037    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:01.226746    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:01.226758    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:01.266671    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:01.266682    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:01.281216    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:01.281231    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:01.293271    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:01.293285    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:01.317167    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:01.317177    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:01.328594    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:01.328605    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:01.333348    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:01.333357    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:01.347461    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:01.347472    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:01.358672    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:01.358682    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:03.872968    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:05.974464    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:05.974974    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:06.015446    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:06.015589    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:06.036997    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:06.037105    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:06.052488    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:06.052570    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:06.065065    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:06.065150    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:06.076210    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:06.076279    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:06.086942    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:06.087017    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:06.104055    4019 logs.go:276] 0 containers: []
	W0814 09:50:06.104065    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:06.104123    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:06.115319    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:06.115338    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:06.115343    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:06.127109    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:06.127120    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:06.139397    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:06.139409    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:06.163699    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:06.163711    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:06.168546    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:06.168553    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:06.197635    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:06.197646    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:06.213490    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:06.213499    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:06.234810    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:06.234819    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:06.248715    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:06.248724    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:06.287168    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:06.287179    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:06.308170    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:06.308180    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:06.319363    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:06.319376    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:06.337642    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:06.337652    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:06.351997    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:06.352011    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:06.363508    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:06.363517    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:06.399163    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:06.399178    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:06.410844    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:06.410855    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:08.928179    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:08.875179    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:08.875608    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:08.912568    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:08.912711    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:08.937125    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:08.937220    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:08.951709    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:08.951788    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:08.963917    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:08.963990    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:08.974140    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:08.974220    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:08.984628    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:08.984701    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:08.995164    4033 logs.go:276] 0 containers: []
	W0814 09:50:08.995175    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:08.995237    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:09.006156    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:09.006174    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:09.006179    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:09.010534    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:09.010543    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:09.024723    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:09.024733    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:09.062917    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:09.062934    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:09.074756    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:09.074770    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:09.087250    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:09.087266    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:09.102608    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:09.102618    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:09.116761    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:09.116777    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:09.133300    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:09.133314    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:09.144812    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:09.144824    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:09.156288    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:09.156300    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:09.173392    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:09.173401    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:09.186355    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:09.186365    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:09.221746    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:09.221758    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:09.246996    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:09.247009    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:09.259202    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:09.259217    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:09.270692    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:09.270704    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:13.930338    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:13.930702    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:13.977475    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:13.977632    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:13.997387    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:13.997478    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:14.012258    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:14.012326    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:14.024548    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:14.024613    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:14.035782    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:14.035855    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:14.047253    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:14.047319    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:14.057313    4019 logs.go:276] 0 containers: []
	W0814 09:50:14.057324    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:14.057387    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:14.067596    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:14.067616    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:14.067623    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:14.071944    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:14.071951    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:14.097009    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:14.097023    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:14.114239    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:14.114252    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:14.134417    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:14.134427    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:14.172952    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:14.172961    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:14.184600    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:14.184611    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:14.196308    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:14.196318    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:14.208167    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:14.208179    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:14.242693    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:14.242704    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:14.257016    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:14.257026    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:14.280197    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:14.280209    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:14.295057    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:14.295069    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:14.312023    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:14.312037    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:14.337407    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:14.337418    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:14.355998    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:14.356012    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:14.367139    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:14.367150    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:11.797506    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:16.893228    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:16.799722    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:16.799958    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:16.821650    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:16.821745    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:16.840217    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:16.840286    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:16.852466    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:16.852541    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:16.869307    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:16.869379    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:16.884044    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:16.884112    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:16.894885    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:16.894947    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:16.904876    4033 logs.go:276] 0 containers: []
	W0814 09:50:16.904888    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:16.904950    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:16.915435    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:16.915455    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:16.915461    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:16.927045    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:16.927056    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:16.939318    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:16.939328    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:16.953095    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:16.953106    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:16.963825    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:16.963837    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:16.975360    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:16.975370    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:16.986843    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:16.986853    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:17.000636    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:17.000646    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:17.025345    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:17.025352    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:17.062777    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:17.062785    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:17.067204    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:17.067211    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:17.102333    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:17.102343    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:17.125738    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:17.125749    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:17.141396    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:17.141407    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:17.160999    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:17.161011    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:17.173270    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:17.173281    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:17.188289    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:17.188298    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:19.701709    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:21.895238    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:21.895464    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:21.922299    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:21.922404    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:21.938054    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:21.938132    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:21.953720    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:21.953787    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:21.964805    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:21.964881    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:21.975412    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:21.975473    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:21.986268    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:21.986330    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:21.996595    4019 logs.go:276] 0 containers: []
	W0814 09:50:21.996607    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:21.996665    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:22.013564    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:22.013583    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:22.013588    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:22.038365    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:22.038374    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:22.052804    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:22.052813    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:22.069744    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:22.069756    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:22.093660    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:22.093670    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:22.105922    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:22.105933    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:22.119920    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:22.119932    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:22.137063    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:22.137074    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:22.151119    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:22.151133    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:22.162962    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:22.162975    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:22.181998    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:22.182008    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:22.216335    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:22.216346    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:22.227348    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:22.227360    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:22.238885    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:22.238898    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:22.254111    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:22.254121    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:22.270695    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:22.270708    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:22.275499    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:22.275507    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:24.703835    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:24.703954    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:24.714859    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:24.714930    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:24.725623    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:24.725685    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:24.735961    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:24.736037    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:24.746742    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:24.746803    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:24.757151    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:24.757210    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:24.771443    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:24.771521    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:24.782020    4033 logs.go:276] 0 containers: []
	W0814 09:50:24.782031    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:24.782089    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:24.794035    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:24.794055    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:24.794060    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:24.805044    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:24.805056    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:24.809265    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:24.809272    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:24.823521    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:24.823530    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:24.837790    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:24.837802    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:24.851705    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:24.851715    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:24.863648    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:24.863663    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:24.875446    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:24.875456    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:24.899166    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:24.899178    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:24.922338    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:24.922346    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:24.958575    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:24.958583    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:24.994481    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:24.994490    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:25.005731    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:25.005743    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:25.027177    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:25.027190    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:25.038572    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:25.038587    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:25.050703    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:25.050713    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:25.064633    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:25.064643    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:24.812292    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:27.578246    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:29.814366    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:29.814756    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:29.845951    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:29.846077    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:29.864613    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:29.864718    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:29.878291    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:29.878368    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:29.889897    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:29.889974    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:29.900171    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:29.900239    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:29.910216    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:29.910293    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:29.920721    4019 logs.go:276] 0 containers: []
	W0814 09:50:29.920733    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:29.920792    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:29.930807    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:29.930823    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:29.930828    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:29.945151    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:29.945164    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:29.960337    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:29.960348    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:29.975116    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:29.975127    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:30.011307    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:30.011318    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:30.028963    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:30.028976    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:30.042896    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:30.042907    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:30.055260    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:30.055272    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:30.059901    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:30.059907    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:30.084892    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:30.084902    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:30.105205    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:30.105217    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:30.126917    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:30.126926    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:30.164203    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:30.164213    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:30.179356    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:30.179367    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:30.191349    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:30.191363    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:30.206144    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:30.206158    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:30.224826    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:30.224840    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:32.736648    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:32.580409    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:32.580621    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:32.600557    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:32.600647    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:32.615198    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:32.615277    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:32.628383    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:32.628454    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:32.638984    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:32.639042    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:32.649553    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:32.649624    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:32.660598    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:32.660656    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:32.670696    4033 logs.go:276] 0 containers: []
	W0814 09:50:32.670709    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:32.670770    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:32.681893    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:32.681912    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:32.681918    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:32.707055    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:32.707067    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:32.718163    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:32.718176    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:32.736547    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:32.736558    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:32.750646    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:32.750656    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:32.786051    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:32.786066    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:32.804895    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:32.804906    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:32.819035    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:32.819046    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:32.830620    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:32.830630    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:32.846245    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:32.846256    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:32.850877    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:32.850884    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:32.864955    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:32.864965    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:32.876440    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:32.876451    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:32.888927    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:32.888939    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:32.913197    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:32.913205    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:32.951072    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:32.951086    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:32.963445    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:32.963456    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
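The block above is the diagnostic sweep that recurs throughout this log: for each control-plane component, minikube lists matching containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tails the last 400 lines of each container's logs, alongside journalctl (kubelet, docker, cri-docker), a filtered dmesg, kubectl describe nodes, and a crictl-or-docker fallback for overall container status. A minimal Go sketch of the ps-then-tail pattern, assuming only that docker is on PATH; the component names and the 400-line tail come from the log above, while the helper names (listContainers, tailLogs) and all other details are illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the log's "docker ps -a --filter=name=k8s_<c>
    // --format={{.ID}}" call and returns the matching container IDs.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs fetches the last 400 lines of one container's logs, the
    // same "docker logs --tail 400 <id>" command shown above.
    func tailLogs(id string) string {
        out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out)
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c) // cf. logs.go:278
                continue
            }
            for _, id := range ids {
                fmt.Printf("=== %s [%s] ===\n%s", c, id, tailLogs(id))
            }
        }
    }

The container-status step uses `which crictl || echo crictl` so that when crictl is absent the first command fails cleanly and the shell falls through to `sudo docker ps -a`.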
	I0814 09:50:35.477370    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:37.738627    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:37.738744    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:37.750382    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:37.750452    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:37.762089    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:37.762166    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:37.773225    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:37.773305    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:37.784541    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:37.784612    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:37.795543    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:37.795614    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:37.807317    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:37.807392    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:37.818298    4019 logs.go:276] 0 containers: []
	W0814 09:50:37.818311    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:37.818376    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:37.829696    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:37.829713    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:37.829719    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:37.848876    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:37.848887    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:37.853614    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:37.853622    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:37.869099    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:37.869111    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:37.881253    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:37.881266    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:37.906063    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:37.906077    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:37.920987    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:37.920998    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:37.936679    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:37.936691    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:37.954988    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:37.955004    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:37.967086    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:37.967098    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:37.980382    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:37.980393    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:37.995245    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:37.995257    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:38.032395    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:38.032409    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:38.060777    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:38.060791    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:38.075645    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:38.075655    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:38.087509    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:38.087520    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:38.105727    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:38.105742    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:40.479410    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
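The line above closes the other recurring pattern in this section: a GET against the apiserver's /healthz endpoint that is abandoned with "Client.Timeout exceeded while awaiting headers" once the client timeout elapses, after which the diagnostic sweep re-runs and the probe is retried. A hedged Go sketch of such a probe loop; the URL comes from the log, the roughly 5-second timeout and 3-second retry spacing are inferred from the timestamp gaps between "Checking" and "stopped" lines, and skipping TLS verification is an assumption made so a bare probe can talk to the apiserver's self-signed certificate:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // inferred from the ~5s check-to-stop gap
            Transport: &http.Transport{
                // Assumption: skip verification, since the apiserver cert is
                // self-signed for the VM address in this setup.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        const url = "https://10.0.2.15:8443/healthz"
        for attempt := 0; attempt < 5; attempt++ {
            fmt.Println("Checking apiserver healthz at", url, "...")
            resp, err := client.Get(url)
            if err != nil {
                fmt.Printf("stopped: %s: %v\n", url, err) // cf. api_server.go:269
                time.Sleep(3 * time.Second)               // inferred retry spacing
                continue
            }
            resp.Body.Close()
            fmt.Println("healthz:", resp.Status)
            return
        }
    }

Note that two PIDs (4019 and 4033) interleave throughout this section; each runs this same probe-and-sweep cycle against its own cluster, which is why near-identical blocks alternate with slightly out-of-order timestamps.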
	I0814 09:50:40.479525    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:40.492259    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:40.492339    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:40.503538    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:40.503615    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:40.514379    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:40.514446    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:40.525182    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:40.525252    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:40.535379    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:40.535447    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:40.546044    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:40.546126    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:40.556221    4033 logs.go:276] 0 containers: []
	W0814 09:50:40.556230    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:40.556284    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:40.570721    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:40.570740    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:40.570745    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:40.584997    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:40.585007    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:40.596587    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:40.596598    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:40.609573    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:40.609583    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:40.625957    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:40.625968    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:40.649935    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:40.649942    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:40.644505    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:40.665553    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:40.665562    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:40.676575    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:40.676594    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:40.713568    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:40.713579    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:40.718193    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:40.718199    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:40.752116    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:40.752131    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:40.776464    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:40.776474    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:40.791062    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:40.791073    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:40.808703    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:40.808712    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:40.822153    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:40.822162    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:40.836507    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:40.836522    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:40.848545    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:40.848559    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:43.365019    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:45.646484    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:45.646727    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:45.663149    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:45.663239    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:45.675130    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:45.675204    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:45.687151    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:45.687227    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:45.698051    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:45.698122    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:45.708425    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:45.708496    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:45.718800    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:45.718866    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:45.728939    4019 logs.go:276] 0 containers: []
	W0814 09:50:45.728952    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:45.729012    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:45.739100    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:45.739118    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:45.739123    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:45.751055    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:45.751067    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:45.789625    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:45.789633    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:45.793758    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:45.793766    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:45.818339    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:45.818350    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:45.832233    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:45.832243    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:45.846783    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:45.846793    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:45.858443    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:45.858456    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:45.892606    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:45.892616    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:45.904372    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:45.904385    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:45.916440    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:45.916452    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:45.928135    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:45.928145    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:45.942338    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:45.942352    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:45.957172    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:45.957183    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:45.971902    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:45.971913    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:45.982971    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:45.982982    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:46.000280    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:46.000290    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:48.524383    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:48.367493    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:48.367714    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:48.387383    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:48.387478    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:48.402680    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:48.402760    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:48.417081    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:48.417157    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:48.427407    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:48.427473    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:48.438354    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:48.438427    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:48.449785    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:48.449858    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:48.460355    4033 logs.go:276] 0 containers: []
	W0814 09:50:48.460374    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:48.460437    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:48.470657    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:48.470679    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:48.470685    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:48.475305    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:48.475311    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:48.497567    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:48.497577    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:48.509731    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:48.509747    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:48.524109    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:48.524123    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:48.543241    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:48.543256    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:48.557020    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:48.557031    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:48.593713    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:48.593721    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:48.608207    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:48.608217    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:48.619939    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:48.619948    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:48.631521    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:48.631533    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:48.656135    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:48.656143    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:48.690899    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:48.690912    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:48.714310    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:48.714324    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:48.728881    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:48.728891    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:48.744987    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:48.745002    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:48.756411    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:48.756425    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:53.526352    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:53.526561    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:53.550913    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:53.551015    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:53.566871    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:53.566953    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:53.579620    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:53.579685    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:53.590857    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:53.590923    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:53.601801    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:53.601881    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:53.616105    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:53.616174    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:53.632881    4019 logs.go:276] 0 containers: []
	W0814 09:50:53.632893    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:53.632957    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:53.649021    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:53.649037    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:53.649045    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:53.653253    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:53.653260    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:53.688165    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:53.688174    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:53.702646    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:53.702658    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:53.714115    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:53.714128    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:53.725927    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:53.725939    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:53.737420    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:53.737430    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:53.751519    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:53.751532    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:53.766054    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:53.766065    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:53.777295    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:53.777308    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:53.793123    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:53.793132    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:53.816147    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:53.816154    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:53.845790    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:53.845800    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:53.862603    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:53.862615    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:53.899451    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:53.899461    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:53.914461    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:53.914474    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:53.928782    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:53.928795    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:51.274136    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:56.440998    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:56.276354    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:56.276549    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:56.289242    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:50:56.289320    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:56.299801    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:50:56.299876    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:56.310208    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:50:56.310303    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:56.320657    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:50:56.320726    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:56.334138    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:50:56.334217    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:56.346425    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:50:56.346489    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:56.356655    4033 logs.go:276] 0 containers: []
	W0814 09:50:56.356665    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:56.356726    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:56.367482    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:50:56.367500    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:50:56.367506    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:50:56.382347    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:50:56.382360    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:50:56.399600    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:50:56.399611    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:50:56.421097    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:50:56.421108    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:50:56.432438    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:56.432452    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:56.471623    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:50:56.471638    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:50:56.498817    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:50:56.498831    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:50:56.512862    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:50:56.512874    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:50:56.524342    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:50:56.524352    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:50:56.537853    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:50:56.537865    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:50:56.551580    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:50:56.551594    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:50:56.563112    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:50:56.563123    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:56.574818    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:56.574832    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:56.579277    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:56.579284    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:56.612818    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:50:56.612833    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:50:56.624452    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:50:56.624463    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:50:56.636416    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:56.636427    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:59.161002    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:01.443002    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:01.443250    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:51:01.457552    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:51:01.457637    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:51:01.469708    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:51:01.469788    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:51:01.480794    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:51:01.480864    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:51:01.491434    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:51:01.491507    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:51:01.503882    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:51:01.503952    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:51:01.514699    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:51:01.514768    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:51:01.524684    4019 logs.go:276] 0 containers: []
	W0814 09:51:01.524693    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:51:01.524751    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:51:01.535127    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:51:01.535146    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:51:01.535151    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:51:01.577794    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:51:01.577808    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:51:01.609980    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:51:01.609994    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:51:01.621844    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:51:01.621854    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:51:01.641019    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:51:01.641029    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:51:01.654077    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:51:01.654087    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:51:01.666196    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:51:01.666208    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:51:01.670904    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:51:01.670911    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:51:01.684926    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:51:01.684937    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:51:01.701880    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:51:01.701890    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:51:01.716337    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:51:01.716348    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:51:01.729123    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:51:01.729134    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:51:01.743714    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:51:01.743724    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:51:01.756087    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:51:01.756098    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:51:01.778696    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:51:01.778704    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:51:01.815937    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:51:01.815948    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:51:01.830448    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:51:01.830462    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:51:04.344874    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:04.163370    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:04.163574    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:51:04.177972    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:51:04.178051    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:51:04.190087    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:51:04.190164    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:51:04.201609    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:51:04.201678    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:51:04.212280    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:51:04.212359    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:51:04.238804    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:51:04.238889    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:51:04.256209    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:51:04.256289    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:51:04.269666    4033 logs.go:276] 0 containers: []
	W0814 09:51:04.269677    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:51:04.269736    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:51:04.280376    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:51:04.280394    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:51:04.280400    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:51:04.291698    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:51:04.291709    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:51:04.315388    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:51:04.315398    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:51:04.319614    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:51:04.319624    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:51:04.334317    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:51:04.334329    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:51:04.348231    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:51:04.348238    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:51:04.360628    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:51:04.360644    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:51:04.372462    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:51:04.372476    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:51:04.410786    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:51:04.410799    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:51:04.445526    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:51:04.445540    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:51:04.469452    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:51:04.469464    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:51:04.483055    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:51:04.483068    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:51:04.495450    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:51:04.495462    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:51:04.512073    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:51:04.512085    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:51:04.530013    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:51:04.530024    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:51:04.545689    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:51:04.545701    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:51:04.557655    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:51:04.557666    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:51:09.345318    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:09.345542    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:51:09.368641    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:51:09.368746    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:51:09.383986    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:51:09.384062    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:51:09.401071    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:51:09.401133    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:51:09.411486    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:51:09.411557    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:51:09.421561    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:51:09.421630    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:51:09.432212    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:51:09.432278    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:51:09.442072    4019 logs.go:276] 0 containers: []
	W0814 09:51:09.442082    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:51:09.442134    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:51:09.453034    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:51:09.453051    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:51:09.453057    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:51:07.071046    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:09.489301    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:51:09.489312    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:51:09.514296    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:51:09.514308    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:51:09.529159    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:51:09.529169    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:51:09.540677    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:51:09.540686    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:51:09.554286    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:51:09.554296    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:51:09.558969    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:51:09.558976    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:51:09.591916    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:51:09.591929    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:51:09.611033    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:51:09.611046    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:51:09.626911    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:51:09.626922    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:51:09.642765    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:51:09.642776    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:51:09.655159    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:51:09.655170    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:51:09.694083    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:51:09.694095    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:51:09.708185    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:51:09.708196    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:51:09.725871    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:51:09.725881    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:51:09.740312    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:51:09.740322    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:51:09.752014    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:51:09.752027    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:51:12.275797    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:12.073603    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:12.074020    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:51:12.107079    4033 logs.go:276] 2 containers: [351b37282b78 8cc17453c508]
	I0814 09:51:12.107207    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:51:12.128174    4033 logs.go:276] 2 containers: [9f25f6d89e86 03512631e6e0]
	I0814 09:51:12.128273    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:51:12.143576    4033 logs.go:276] 1 containers: [b4e8c942b977]
	I0814 09:51:12.143658    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:51:12.161749    4033 logs.go:276] 2 containers: [c4ee65912640 f85fd17e0cb2]
	I0814 09:51:12.161828    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:51:12.174781    4033 logs.go:276] 1 containers: [451ede5263eb]
	I0814 09:51:12.174853    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:51:12.185684    4033 logs.go:276] 2 containers: [2b8801fcf8bf 053e1a0d063c]
	I0814 09:51:12.185758    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:51:12.196639    4033 logs.go:276] 0 containers: []
	W0814 09:51:12.196652    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:51:12.196714    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:51:12.207105    4033 logs.go:276] 2 containers: [262a2b312a14 7c16146209d8]
	I0814 09:51:12.207127    4033 logs.go:123] Gathering logs for kube-apiserver [351b37282b78] ...
	I0814 09:51:12.207133    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351b37282b78"
	I0814 09:51:12.221337    4033 logs.go:123] Gathering logs for etcd [03512631e6e0] ...
	I0814 09:51:12.221347    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03512631e6e0"
	I0814 09:51:12.235509    4033 logs.go:123] Gathering logs for kube-scheduler [f85fd17e0cb2] ...
	I0814 09:51:12.235519    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f85fd17e0cb2"
	I0814 09:51:12.249697    4033 logs.go:123] Gathering logs for kube-controller-manager [2b8801fcf8bf] ...
	I0814 09:51:12.249708    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b8801fcf8bf"
	I0814 09:51:12.266530    4033 logs.go:123] Gathering logs for storage-provisioner [262a2b312a14] ...
	I0814 09:51:12.266540    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262a2b312a14"
	I0814 09:51:12.278123    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:51:12.278134    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:51:12.316962    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:51:12.316970    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:51:12.352120    4033 logs.go:123] Gathering logs for kube-apiserver [8cc17453c508] ...
	I0814 09:51:12.352130    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc17453c508"
	I0814 09:51:12.376188    4033 logs.go:123] Gathering logs for etcd [9f25f6d89e86] ...
	I0814 09:51:12.376200    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f25f6d89e86"
	I0814 09:51:12.394234    4033 logs.go:123] Gathering logs for kube-scheduler [c4ee65912640] ...
	I0814 09:51:12.394245    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4ee65912640"
	I0814 09:51:12.406180    4033 logs.go:123] Gathering logs for kube-proxy [451ede5263eb] ...
	I0814 09:51:12.406191    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451ede5263eb"
	I0814 09:51:12.422608    4033 logs.go:123] Gathering logs for storage-provisioner [7c16146209d8] ...
	I0814 09:51:12.422618    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c16146209d8"
	I0814 09:51:12.433698    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:51:12.433709    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:51:12.438671    4033 logs.go:123] Gathering logs for coredns [b4e8c942b977] ...
	I0814 09:51:12.438676    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8c942b977"
	I0814 09:51:12.450096    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:51:12.450108    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:51:12.474132    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:51:12.474140    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:51:12.487777    4033 logs.go:123] Gathering logs for kube-controller-manager [053e1a0d063c] ...
	I0814 09:51:12.487788    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053e1a0d063c"
	I0814 09:51:15.001837    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:17.277790    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:17.277873    4019 kubeadm.go:597] duration metric: took 4m3.831372375s to restartPrimaryControlPlane
	W0814 09:51:17.277935    4019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 09:51:17.277968    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0814 09:51:18.320255    4019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.04231625s)
	I0814 09:51:18.320314    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:51:18.325372    4019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:51:18.328230    4019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:51:18.330864    4019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:51:18.330869    4019 kubeadm.go:157] found existing configuration files:
	
	I0814 09:51:18.330893    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf
	I0814 09:51:18.333669    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 09:51:18.333697    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 09:51:18.336722    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf
	I0814 09:51:18.339302    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 09:51:18.339326    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 09:51:18.341993    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf
	I0814 09:51:18.344970    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 09:51:18.344992    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:51:18.347657    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf
	I0814 09:51:18.350231    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 09:51:18.350257    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 09:51:18.353619    4019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 09:51:18.372333    4019 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0814 09:51:18.372422    4019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 09:51:18.429445    4019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 09:51:18.429536    4019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 09:51:18.429696    4019 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 09:51:18.479796    4019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 09:51:18.483092    4019 out.go:204]   - Generating certificates and keys ...
	I0814 09:51:18.483186    4019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 09:51:18.483278    4019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 09:51:18.483362    4019 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 09:51:18.483396    4019 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 09:51:18.483428    4019 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 09:51:18.483455    4019 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 09:51:18.483490    4019 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 09:51:18.483519    4019 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 09:51:18.483553    4019 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 09:51:18.483586    4019 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 09:51:18.483616    4019 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 09:51:18.483659    4019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 09:51:18.573409    4019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 09:51:18.733343    4019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 09:51:18.892816    4019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 09:51:18.953320    4019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 09:51:18.982385    4019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 09:51:18.982785    4019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 09:51:18.982850    4019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 09:51:19.067085    4019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 09:51:19.071291    4019 out.go:204]   - Booting up control plane ...
	I0814 09:51:19.071348    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 09:51:19.071402    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 09:51:19.071453    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 09:51:19.071497    4019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 09:51:19.071637    4019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 09:51:20.002990    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:20.003022    4033 kubeadm.go:597] duration metric: took 4m4.397822792s to restartPrimaryControlPlane
	W0814 09:51:20.003052    4033 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 09:51:20.003067    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0814 09:51:21.061708    4033 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.058676959s)
	I0814 09:51:21.061793    4033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:51:21.068216    4033 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:51:21.071063    4033 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:51:21.074407    4033 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:51:21.074412    4033 kubeadm.go:157] found existing configuration files:
	
	I0814 09:51:21.074439    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/admin.conf
	I0814 09:51:21.077531    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 09:51:21.077554    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 09:51:21.080104    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/kubelet.conf
	I0814 09:51:21.082721    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 09:51:21.082748    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 09:51:21.085760    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/controller-manager.conf
	I0814 09:51:21.088404    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 09:51:21.088426    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:51:21.090840    4033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/scheduler.conf
	I0814 09:51:21.093684    4033 kubeadm.go:163] "https://control-plane.minikube.internal:50343" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50343 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 09:51:21.093707    4033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 09:51:21.096538    4033 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 09:51:21.114365    4033 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0814 09:51:21.114393    4033 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 09:51:21.164031    4033 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 09:51:21.164088    4033 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 09:51:21.164156    4033 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 09:51:21.213693    4033 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 09:51:21.221851    4033 out.go:204]   - Generating certificates and keys ...
	I0814 09:51:21.221887    4033 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 09:51:21.221916    4033 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 09:51:21.221960    4033 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 09:51:21.221995    4033 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 09:51:21.222036    4033 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 09:51:21.222083    4033 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 09:51:21.222121    4033 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 09:51:21.222154    4033 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 09:51:21.222187    4033 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 09:51:21.222224    4033 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 09:51:21.222242    4033 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 09:51:21.222277    4033 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 09:51:21.462741    4033 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 09:51:21.646629    4033 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 09:51:21.783046    4033 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 09:51:21.958814    4033 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 09:51:21.988432    4033 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 09:51:21.988816    4033 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 09:51:21.988838    4033 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 09:51:22.075930    4033 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 09:51:24.076851    4019 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.005281 seconds
	I0814 09:51:24.077054    4019 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 09:51:24.081155    4019 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 09:51:24.591967    4019 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 09:51:24.592072    4019 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-996000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 09:51:25.096037    4019 kubeadm.go:310] [bootstrap-token] Using token: aevy0w.o3qfcbxlyi7dbsuv
	I0814 09:51:25.102432    4019 out.go:204]   - Configuring RBAC rules ...
	I0814 09:51:25.102493    4019 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 09:51:25.102537    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 09:51:25.107374    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 09:51:25.108337    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 09:51:25.108998    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 09:51:25.109833    4019 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 09:51:25.113023    4019 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 09:51:25.273377    4019 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 09:51:25.501564    4019 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 09:51:25.502136    4019 kubeadm.go:310] 
	I0814 09:51:25.502169    4019 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 09:51:25.502175    4019 kubeadm.go:310] 
	I0814 09:51:25.502219    4019 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 09:51:25.502223    4019 kubeadm.go:310] 
	I0814 09:51:25.502244    4019 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 09:51:25.502282    4019 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 09:51:25.502322    4019 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 09:51:25.502328    4019 kubeadm.go:310] 
	I0814 09:51:25.502358    4019 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 09:51:25.502376    4019 kubeadm.go:310] 
	I0814 09:51:25.502412    4019 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 09:51:25.502415    4019 kubeadm.go:310] 
	I0814 09:51:25.502457    4019 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 09:51:25.502506    4019 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 09:51:25.502555    4019 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 09:51:25.502559    4019 kubeadm.go:310] 
	I0814 09:51:25.502608    4019 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 09:51:25.502647    4019 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 09:51:25.502650    4019 kubeadm.go:310] 
	I0814 09:51:25.502696    4019 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token aevy0w.o3qfcbxlyi7dbsuv \
	I0814 09:51:25.502758    4019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6bc1bdbbe167ab66a20d6bf1c306e986530a9d0fee84c418f91e1b4312d4e260 \
	I0814 09:51:25.502770    4019 kubeadm.go:310] 	--control-plane 
	I0814 09:51:25.502775    4019 kubeadm.go:310] 
	I0814 09:51:25.502863    4019 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 09:51:25.502866    4019 kubeadm.go:310] 
	I0814 09:51:25.502913    4019 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token aevy0w.o3qfcbxlyi7dbsuv \
	I0814 09:51:25.502969    4019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6bc1bdbbe167ab66a20d6bf1c306e986530a9d0fee84c418f91e1b4312d4e260 
	I0814 09:51:25.503102    4019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 09:51:25.503144    4019 cni.go:84] Creating CNI manager for ""
	I0814 09:51:25.503154    4019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:51:25.509827    4019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 09:51:25.513939    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 09:51:25.517743    4019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 09:51:25.522859    4019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:51:25.522910    4019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:25.522911    4019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-996000 minikube.k8s.io/updated_at=2024_08_14T09_51_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=stopped-upgrade-996000 minikube.k8s.io/primary=true
	I0814 09:51:25.565547    4019 kubeadm.go:1113] duration metric: took 42.675833ms to wait for elevateKubeSystemPrivileges
	I0814 09:51:25.565579    4019 ops.go:34] apiserver oom_adj: -16
	I0814 09:51:25.565585    4019 kubeadm.go:394] duration metric: took 4m12.1358215s to StartCluster
	I0814 09:51:25.565596    4019 settings.go:142] acquiring lock: {Name:mk45b0aba98bc9a80a7cc9e2d664f69dcf74de9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:51:25.565691    4019 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:51:25.566084    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/kubeconfig: {Name:mkd5271b15535f495ab8e34d870e7dbcadc9c40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:51:25.566278    4019 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:51:25.566297    4019 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 09:51:25.566337    4019 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-996000"
	I0814 09:51:25.566391    4019 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-996000"
	I0814 09:51:25.566390    4019 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-996000"
	W0814 09:51:25.566398    4019 addons.go:243] addon storage-provisioner should already be in state true
	I0814 09:51:25.566406    4019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-996000"
	I0814 09:51:25.566410    4019 host.go:66] Checking if "stopped-upgrade-996000" exists ...
	I0814 09:51:25.566364    4019 config.go:182] Loaded profile config "stopped-upgrade-996000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0814 09:51:25.567336    4019 kapi.go:59] client config for stopped-upgrade-996000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/client.key", CAFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102907e30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 09:51:25.567454    4019 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-996000"
	W0814 09:51:25.567459    4019 addons.go:243] addon default-storageclass should already be in state true
	I0814 09:51:25.567465    4019 host.go:66] Checking if "stopped-upgrade-996000" exists ...
	I0814 09:51:25.569902    4019 out.go:177] * Verifying Kubernetes components...
	I0814 09:51:25.570239    4019 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:51:25.573023    4019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:51:25.573030    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	I0814 09:51:25.573882    4019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:51:22.079138    4033 out.go:204]   - Booting up control plane ...
	I0814 09:51:22.079188    4033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 09:51:22.079229    4033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 09:51:22.079261    4033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 09:51:22.079305    4033 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 09:51:22.079427    4033 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 09:51:26.078878    4033 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001782 seconds
	I0814 09:51:26.078935    4033 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 09:51:26.083779    4033 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 09:51:26.593196    4033 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 09:51:26.593349    4033 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-579000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 09:51:27.096748    4033 kubeadm.go:310] [bootstrap-token] Using token: jgb9at.yikhjz5w53wfghkv
	I0814 09:51:27.105140    4033 out.go:204]   - Configuring RBAC rules ...
	I0814 09:51:27.105200    4033 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 09:51:27.105246    4033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 09:51:27.106043    4033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 09:51:27.107069    4033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 09:51:27.107968    4033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 09:51:27.108795    4033 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 09:51:27.112013    4033 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 09:51:27.280116    4033 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 09:51:27.499967    4033 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 09:51:27.500354    4033 kubeadm.go:310] 
	I0814 09:51:27.500381    4033 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 09:51:27.500384    4033 kubeadm.go:310] 
	I0814 09:51:27.500421    4033 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 09:51:27.500424    4033 kubeadm.go:310] 
	I0814 09:51:27.500457    4033 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 09:51:27.500494    4033 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 09:51:27.500524    4033 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 09:51:27.500556    4033 kubeadm.go:310] 
	I0814 09:51:27.500639    4033 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 09:51:27.500643    4033 kubeadm.go:310] 
	I0814 09:51:27.500666    4033 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 09:51:27.500675    4033 kubeadm.go:310] 
	I0814 09:51:27.500701    4033 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 09:51:27.500761    4033 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 09:51:27.500828    4033 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 09:51:27.500833    4033 kubeadm.go:310] 
	I0814 09:51:27.500878    4033 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 09:51:27.500954    4033 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 09:51:27.500960    4033 kubeadm.go:310] 
	I0814 09:51:27.501018    4033 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jgb9at.yikhjz5w53wfghkv \
	I0814 09:51:27.501121    4033 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6bc1bdbbe167ab66a20d6bf1c306e986530a9d0fee84c418f91e1b4312d4e260 \
	I0814 09:51:27.501133    4033 kubeadm.go:310] 	--control-plane 
	I0814 09:51:27.501136    4033 kubeadm.go:310] 
	I0814 09:51:27.501245    4033 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 09:51:27.501251    4033 kubeadm.go:310] 
	I0814 09:51:27.501290    4033 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jgb9at.yikhjz5w53wfghkv \
	I0814 09:51:27.501345    4033 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6bc1bdbbe167ab66a20d6bf1c306e986530a9d0fee84c418f91e1b4312d4e260 
	I0814 09:51:27.501483    4033 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 09:51:27.501493    4033 cni.go:84] Creating CNI manager for ""
	I0814 09:51:27.501502    4033 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:51:27.505704    4033 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 09:51:27.511692    4033 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 09:51:27.514643    4033 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 09:51:27.519956    4033 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:51:27.520051    4033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-579000 minikube.k8s.io/updated_at=2024_08_14T09_51_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=running-upgrade-579000 minikube.k8s.io/primary=true
	I0814 09:51:27.520090    4033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:27.566773    4033 ops.go:34] apiserver oom_adj: -16
	I0814 09:51:27.566860    4033 kubeadm.go:1113] duration metric: took 46.857375ms to wait for elevateKubeSystemPrivileges
	I0814 09:51:27.566873    4033 kubeadm.go:394] duration metric: took 4m11.9787485s to StartCluster
	I0814 09:51:27.566882    4033 settings.go:142] acquiring lock: {Name:mk45b0aba98bc9a80a7cc9e2d664f69dcf74de9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:51:27.566957    4033 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:51:27.567352    4033 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/kubeconfig: {Name:mkd5271b15535f495ab8e34d870e7dbcadc9c40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:51:27.567567    4033 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:51:27.567577    4033 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 09:51:27.567617    4033 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-579000"
	I0814 09:51:27.567632    4033 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-579000"
	W0814 09:51:27.567635    4033 addons.go:243] addon storage-provisioner should already be in state true
	I0814 09:51:27.567646    4033 host.go:66] Checking if "running-upgrade-579000" exists ...
	I0814 09:51:27.567645    4033 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-579000"
	I0814 09:51:27.567662    4033 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-579000"
	I0814 09:51:27.567646    4033 config.go:182] Loaded profile config "running-upgrade-579000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0814 09:51:27.568588    4033 kapi.go:59] client config for running-upgrade-579000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/running-upgrade-579000/client.key", CAFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10605fe30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 09:51:27.568752    4033 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-579000"
	W0814 09:51:27.568761    4033 addons.go:243] addon default-storageclass should already be in state true
	I0814 09:51:27.568769    4033 host.go:66] Checking if "running-upgrade-579000" exists ...
	I0814 09:51:27.571697    4033 out.go:177] * Verifying Kubernetes components...
	I0814 09:51:27.572055    4033 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:51:27.575742    4033 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:51:27.575748    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	I0814 09:51:27.579599    4033 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:51:25.577929    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:51:25.581969    4019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:51:25.581977    4019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:51:25.581985    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	I0814 09:51:25.649481    4019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 09:51:25.655056    4019 api_server.go:52] waiting for apiserver process to appear ...
	I0814 09:51:25.655105    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:51:25.658948    4019 api_server.go:72] duration metric: took 92.663875ms to wait for apiserver process to appear ...
	I0814 09:51:25.658957    4019 api_server.go:88] waiting for apiserver healthz status ...
	I0814 09:51:25.658964    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:25.678131    4019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:51:25.724268    4019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:51:26.049147    4019 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0814 09:51:26.049160    4019 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0814 09:51:27.583671    4033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:51:27.589655    4033 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:51:27.589662    4033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:51:27.589668    4033 sshutil.go:53] new ssh client: &{IP:localhost Port:50274 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/running-upgrade-579000/id_rsa Username:docker}
	I0814 09:51:27.679170    4033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 09:51:27.685428    4033 api_server.go:52] waiting for apiserver process to appear ...
	I0814 09:51:27.685470    4033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:51:27.689311    4033 api_server.go:72] duration metric: took 121.73775ms to wait for apiserver process to appear ...
	I0814 09:51:27.689318    4033 api_server.go:88] waiting for apiserver healthz status ...
	I0814 09:51:27.689325    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:27.718669    4033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:51:27.742350    4033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:51:28.066946    4033 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0814 09:51:28.066958    4033 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0814 09:51:30.660852    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:30.660886    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:32.691168    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:32.691190    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:35.660947    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:35.660966    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:37.691185    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:37.691207    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:40.661047    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:40.661074    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:42.691285    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:42.691331    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:45.661210    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:45.661234    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:47.691535    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:47.691582    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:50.661488    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:50.661533    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:52.691920    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:52.691947    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:55.662000    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:55.662028    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0814 09:51:56.050199    4019 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0814 09:51:56.054392    4019 out.go:177] * Enabled addons: storage-provisioner
	I0814 09:51:57.692447    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:57.692483    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0814 09:51:58.067002    4033 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0814 09:51:58.071267    4033 out.go:177] * Enabled addons: storage-provisioner
	I0814 09:51:56.065315    4019 addons.go:510] duration metric: took 30.50035375s for enable addons: enabled=[storage-provisioner]
	I0814 09:51:58.079267    4033 addons.go:510] duration metric: took 30.513024458s for enable addons: enabled=[storage-provisioner]
	I0814 09:52:00.662646    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:00.662697    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:02.693186    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:02.693218    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:05.663598    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:05.663638    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:07.694122    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:07.694174    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:10.665086    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:10.665134    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:12.694283    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:12.694350    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:15.665929    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:15.665959    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:17.695655    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:17.695679    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:20.667748    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:20.667786    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:22.697323    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:22.697369    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:25.669857    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:25.670032    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:25.681449    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:52:25.681522    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:25.692672    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:52:25.692747    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:25.704012    4019 logs.go:276] 2 containers: [9c8867ac9a63 b48b5e6429a9]
	I0814 09:52:25.704079    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:25.715099    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:52:25.715170    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:25.725810    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:52:25.725875    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:25.736909    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:52:25.736968    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:25.748230    4019 logs.go:276] 0 containers: []
	W0814 09:52:25.748243    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:25.748306    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:25.759107    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:52:25.759120    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:52:25.759127    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:52:25.771295    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:52:25.771306    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:52:25.783774    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:25.783784    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:25.819782    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:52:25.819795    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:52:25.836818    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:52:25.836829    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:52:25.852624    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:52:25.852635    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:52:25.864866    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:52:25.864882    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:52:25.877589    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:52:25.877601    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:52:25.894162    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:25.894174    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:52:25.927717    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:25.927811    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:25.928946    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:25.928951    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:25.933163    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:52:25.933170    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:52:25.955958    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:25.955972    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:25.979483    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:52:25.979491    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:25.991753    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:25.991763    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:52:25.991795    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:52:25.991801    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:25.991805    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:25.991862    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:25.991882    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:52:27.699388    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:27.699500    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:27.710830    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:52:27.710903    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:27.721359    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:52:27.721426    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:27.732558    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:52:27.732628    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:27.742880    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:52:27.742947    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:27.753549    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:52:27.753617    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:27.764020    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:52:27.764095    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:27.775056    4033 logs.go:276] 0 containers: []
	W0814 09:52:27.775070    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:27.775129    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:27.787424    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:52:27.787440    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:52:27.787446    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:52:27.805679    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:27.805690    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:27.830437    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:27.830453    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:27.834895    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:52:27.834902    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:52:27.850479    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:52:27.850490    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:52:27.870927    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:52:27.870943    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:52:27.885506    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:52:27.885515    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:52:27.897611    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:52:27.897621    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:52:27.909501    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:52:27.909517    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:52:27.920954    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:52:27.920964    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:27.932553    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:27.932562    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:52:27.968072    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:27.968080    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:28.039866    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:52:28.039877    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
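
Both PIDs (4019 and 4033) are cycling through the same pattern: poll https://10.0.2.15:8443/healthz with a roughly 5 s client timeout, and when the request times out, enumerate each control-plane container by name filter before re-gathering its logs. A minimal Go sketch of that loop follows; it illustrates the pattern visible in the log rather than reproducing minikube's api_server.go, and the timeout and sleep values are inferred from the timestamps.

    // Minimal sketch (an illustration, not minikube's code) of the retry loop:
    // poll the apiserver healthz endpoint; on failure, list the control-plane
    // containers that the next round of log gathering will tail.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5 s "Checking ..." -> "stopped: ..." gap
            Transport: &http.Transport{
                // healthz probe against the VM's self-signed cert; verification skipped
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return // apiserver is healthy; stop polling
                }
            }
            // Same name filters as the logged `docker ps` commands.
            for _, name := range []string{
                "kube-apiserver", "etcd", "coredns", "kube-scheduler",
                "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
            } {
                out, _ := exec.Command("docker", "ps", "-a",
                    "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
                ids := strings.Fields(string(out))
                fmt.Printf("%d containers: %v\n", len(ids), ids)
            }
            time.Sleep(2 * time.Second) // the log shows roughly 2-5 s between rounds
        }
    }
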
	I0814 09:52:30.556177    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:35.558365    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:35.558525    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:35.570793    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:52:35.570869    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:35.585371    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:52:35.585448    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:35.599254    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:52:35.599327    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:35.609343    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:52:35.609411    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:35.620431    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:52:35.620501    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:35.631002    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:52:35.631070    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:35.641008    4033 logs.go:276] 0 containers: []
	W0814 09:52:35.641019    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:35.641074    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:35.995572    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:35.652420    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:52:35.652436    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:52:35.652442    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:52:35.664545    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:52:35.664556    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:52:35.677843    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:52:35.677854    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:52:35.692391    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:52:35.692402    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:52:35.710519    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:52:35.710529    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:52:35.722083    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:35.722096    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:35.745221    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:35.745230    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:35.749591    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:52:35.749601    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:52:35.763446    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:52:35.763459    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:35.776289    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:52:35.776301    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:52:35.790956    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:52:35.790967    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:52:35.803243    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:35.803253    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:52:35.838459    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:35.838470    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:38.377273    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:40.997569    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:40.997735    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:41.009824    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:52:41.009901    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:41.020793    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:52:41.020863    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:41.031903    4019 logs.go:276] 2 containers: [9c8867ac9a63 b48b5e6429a9]
	I0814 09:52:41.031973    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:41.042400    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:52:41.042466    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:41.052852    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:52:41.052919    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:41.063001    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:52:41.063073    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:41.073095    4019 logs.go:276] 0 containers: []
	W0814 09:52:41.073105    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:41.073162    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:41.083766    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:52:41.083781    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:41.083786    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:41.108804    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:52:41.108813    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:41.120340    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:41.120351    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:52:41.155854    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:41.155948    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:41.157142    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:41.157149    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:41.161102    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:52:41.161108    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:52:41.175291    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:52:41.175300    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:52:41.195589    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:52:41.195599    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:52:41.210230    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:52:41.210240    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:52:41.221806    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:41.221818    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:41.261429    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:52:41.261443    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:52:41.276082    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:52:41.276094    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:52:41.288234    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:52:41.288244    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:52:41.303707    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:52:41.303719    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:52:41.321373    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:41.321384    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:52:41.321413    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:52:41.321417    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:41.321421    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:41.321432    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:41.321443    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
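
The "container status" step above uses a shell fallback — sudo `which crictl || echo crictl` ps -a || sudo docker ps -a — so crictl is preferred when installed and "docker ps -a" is the backstop. A hedged Go equivalent, with error handling simplified relative to the real command:

    // Sketch of the container-status fallback: try crictl first, then docker.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // crictl missing or erroring: same backstop as the logged command.
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
            if err != nil {
                fmt.Println("both crictl and docker failed:", err)
                return
            }
        }
        fmt.Print(string(out))
    }
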
	I0814 09:52:43.379237    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:43.379375    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:43.392911    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:52:43.392985    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:43.404058    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:52:43.404124    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:43.414650    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:52:43.414719    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:43.429210    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:52:43.429274    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:43.439045    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:52:43.439107    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:43.449382    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:52:43.449449    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:43.459759    4033 logs.go:276] 0 containers: []
	W0814 09:52:43.459770    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:43.459818    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:43.470105    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:52:43.470119    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:52:43.470124    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:52:43.483994    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:52:43.484009    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:52:43.495969    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:52:43.495981    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:52:43.508032    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:52:43.508041    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:52:43.522900    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:43.522910    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:43.547982    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:52:43.547991    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:43.559177    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:52:43.559188    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:52:43.571002    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:43.571013    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:52:43.604404    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:43.604412    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:43.609121    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:43.609127    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:43.643850    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:52:43.643862    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:52:43.658525    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:52:43.658536    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:52:43.674148    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:52:43.674161    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:52:46.193905    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:51.325137    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:51.195602    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:51.195810    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:51.213418    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:52:51.213502    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:51.226537    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:52:51.226617    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:51.242357    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:52:51.242429    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:51.252599    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:52:51.252671    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:51.263102    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:52:51.263168    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:51.273252    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:52:51.273321    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:51.283273    4033 logs.go:276] 0 containers: []
	W0814 09:52:51.283286    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:51.283349    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:51.294321    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:52:51.294335    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:51.294340    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:51.298855    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:51.298863    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:51.336178    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:52:51.336188    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:52:51.348505    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:52:51.348517    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:52:51.362363    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:52:51.362378    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:52:51.375065    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:52:51.375075    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:52:51.393470    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:52:51.393481    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:51.405159    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:51.405171    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:52:51.439427    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:52:51.439437    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:52:51.454152    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:52:51.454164    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:52:51.468625    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:52:51.468634    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:52:51.483338    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:52:51.483348    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:52:51.494995    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:51.495006    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:54.020128    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:56.327125    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:56.327306    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:56.345676    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:52:56.345762    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:56.358588    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:52:56.358663    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:56.370225    4019 logs.go:276] 2 containers: [9c8867ac9a63 b48b5e6429a9]
	I0814 09:52:56.370289    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:56.381465    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:52:56.381534    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:56.392881    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:52:56.392946    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:56.404772    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:52:56.404841    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:56.414846    4019 logs.go:276] 0 containers: []
	W0814 09:52:56.414858    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:56.414918    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:56.425734    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:52:56.425750    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:52:56.425755    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:52:56.440049    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:52:56.440060    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:52:56.454717    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:52:56.454728    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:52:56.465997    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:56.466008    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:56.490586    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:56.490597    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:56.495193    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:56.495199    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:56.530193    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:52:56.530203    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:52:56.544896    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:52:56.544908    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:52:56.556788    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:52:56.556799    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:52:56.574588    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:52:56.574601    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:52:56.585959    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:52:56.585972    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:56.598240    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:56.598251    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:52:56.632802    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:56.632898    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:56.634042    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:52:56.634046    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:52:56.646081    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:56.646093    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:52:56.646120    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:52:56.646125    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:56.646129    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:56.646133    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:56.646136    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
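
Once the enumeration step has produced container IDs (for example [741deb167866] for kube-apiserver and [ba7f625babd1 178c997768be] for coredns), each "Gathering logs for ..." line corresponds to tailing the last 400 lines of that container. A sketch of that per-component pull; the IDs below are copied from the log purely as placeholders:

    // Sketch of the per-component log pull seen in the "docker logs --tail 400"
    // commands above. Substitute your own container IDs.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        containers := map[string][]string{
            "kube-apiserver": {"741deb167866"},
            "coredns":        {"ba7f625babd1", "178c997768be"},
        }
        for component, ids := range containers {
            for _, id := range ids {
                fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
                out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                if err != nil {
                    fmt.Println("docker logs failed:", err)
                    continue
                }
                fmt.Print(string(out))
            }
        }
    }
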
	I0814 09:52:59.022226    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:59.022409    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:59.036970    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:52:59.037058    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:59.049138    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:52:59.049214    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:59.059730    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:52:59.059798    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:59.070502    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:52:59.070574    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:59.080758    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:52:59.080834    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:59.090958    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:52:59.091021    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:59.101443    4033 logs.go:276] 0 containers: []
	W0814 09:52:59.101454    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:59.101518    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:59.111589    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:52:59.111604    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:52:59.111609    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:52:59.123837    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:52:59.123847    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:52:59.135485    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:52:59.135496    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:52:59.150981    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:52:59.150991    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:52:59.168241    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:52:59.168252    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:52:59.180358    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:59.180369    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:59.203828    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:52:59.203837    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:59.215040    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:52:59.215052    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:52:59.229172    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:59.229182    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:59.233715    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:59.233721    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:59.270501    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:52:59.270511    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:52:59.289244    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:52:59.289256    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:52:59.306714    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:59.306726    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:01.844852    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:06.649858    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:06.846889    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:06.847084    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:06.866762    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:06.866865    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:06.881689    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:06.881764    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:06.893883    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:53:06.893951    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:06.905061    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:06.905133    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:06.916077    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:06.916146    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:06.926818    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:06.926888    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:06.937311    4033 logs.go:276] 0 containers: []
	W0814 09:53:06.937321    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:06.937379    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:06.948394    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:06.948409    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:06.948414    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:06.960898    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:06.960909    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:06.978751    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:06.978762    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:07.004180    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:07.004188    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:07.015445    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:07.015459    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:07.049497    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:07.049510    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:07.063900    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:07.063911    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:07.077950    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:07.077962    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:07.089815    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:07.089824    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:07.105144    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:07.105157    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:07.122305    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:07.122318    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:07.133885    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:07.133895    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:07.168273    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:07.168281    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
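
The "describe nodes" step shells out to the kubectl binary that minikube staged inside the guest, pointed at the guest-side kubeconfig; both paths are taken verbatim from the log. A small illustrative wrapper (not minikube's code):

    // Sketch: run the guest's pinned kubectl against the guest kubeconfig,
    // as in the logged "describe nodes" command.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out)) // print whatever output was produced either way
    }
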
	I0814 09:53:09.674676    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:11.651965    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:11.652109    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:11.666059    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:53:11.666144    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:11.681888    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:53:11.681959    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:11.697783    4019 logs.go:276] 2 containers: [9c8867ac9a63 b48b5e6429a9]
	I0814 09:53:11.697854    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:11.708595    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:53:11.708665    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:11.719307    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:53:11.719374    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:11.729974    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:53:11.730032    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:11.740342    4019 logs.go:276] 0 containers: []
	W0814 09:53:11.740355    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:11.740415    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:11.751226    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:53:11.751239    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:53:11.751244    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:53:11.763337    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:53:11.763348    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:53:11.775587    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:53:11.775601    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:53:11.787153    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:53:11.787170    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:53:11.805522    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:11.805539    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:53:11.839723    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:11.839819    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:11.841039    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:11.841046    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:11.847862    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:53:11.847873    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:53:11.863555    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:53:11.863565    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:53:11.877644    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:53:11.877654    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:53:11.890305    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:11.890316    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:11.926238    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:53:11.926248    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:53:11.941322    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:11.941330    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:11.965999    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:53:11.966009    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:11.978010    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:11.978020    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:53:11.978044    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:53:11.978049    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:11.978054    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:11.978072    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:11.978076    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:53:14.676758    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:14.676982    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:14.704992    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:14.705143    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:14.722370    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:14.722451    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:14.739083    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:53:14.739148    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:14.750072    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:14.750138    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:14.760728    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:14.760793    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:14.771499    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:14.771564    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:14.781711    4033 logs.go:276] 0 containers: []
	W0814 09:53:14.781722    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:14.781775    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:14.795548    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:14.795564    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:14.795570    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:14.810836    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:14.810850    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:14.815633    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:14.815639    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:14.831910    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:14.831921    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:14.846195    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:14.846205    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:14.859831    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:14.859845    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:14.871998    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:14.872009    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:14.884280    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:14.884290    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:14.901295    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:14.901305    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:14.925003    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:14.925012    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:14.959319    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:14.959327    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:14.999495    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:14.999506    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:15.014563    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:15.014572    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:17.528775    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:21.980749    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:22.530847    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:22.531003    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:22.545429    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:22.545509    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:22.556436    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:22.556510    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:22.568791    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:53:22.568865    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:22.579518    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:22.579583    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:22.590096    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:22.590169    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:22.600304    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:22.600374    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:22.610975    4033 logs.go:276] 0 containers: []
	W0814 09:53:22.610987    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:22.611044    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:22.621792    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:22.621808    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:22.621814    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:22.657490    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:22.657499    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:22.671813    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:22.671823    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:22.686704    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:22.686714    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:22.698557    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:22.698568    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:22.714211    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:22.714224    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:22.725733    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:22.725744    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:22.730715    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:22.730723    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:22.765945    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:22.765956    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:22.783473    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:22.783482    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:22.797466    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:22.797480    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:22.815578    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:22.815591    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:22.834197    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:22.834208    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
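Once the IDs are known, each "Gathering logs for ..." step shells out to docker logs with a 400-line tail for containers, and to journalctl for the kubelet and Docker/cri-docker units. The same evidence can be pulled manually from inside the guest; here cd7351187c71 (the storage-provisioner container found above) stands in for any ID from the listings:

	docker logs --tail 400 cd7351187c71
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400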
	I0814 09:53:25.360862    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:26.981022    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
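The api_server.go lines show the readiness-probe pattern: a GET against the apiserver's /healthz endpoint with a client timeout, retried until an overall deadline. Every attempt in this section ends in "context deadline exceeded", meaning nothing answered on 10.0.2.15:8443 before the client timeout. A rough equivalent probe from inside the guest (run via minikube ssh, since 10.0.2.15 is the QEMU user-network guest address; -k skips certificate verification and --max-time caps the wait, both assumptions about a manual check, not what minikube itself executes):

	curl -k --max-time 5 https://10.0.2.15:8443/healthz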
	I0814 09:53:26.981114    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:26.993196    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:53:26.993270    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:27.004252    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:53:27.004327    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:27.015106    4019 logs.go:276] 3 containers: [f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:53:27.015183    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:27.026263    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:53:27.026331    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:27.036860    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:53:27.036929    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:27.047395    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:53:27.047463    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:27.057670    4019 logs.go:276] 0 containers: []
	W0814 09:53:27.057680    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:27.057736    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:27.067982    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:53:27.068003    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:27.068008    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:53:27.099696    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:27.099791    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
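The two kubelet problems flagged above are node-authorizer denials: the kubelet's reflector tries to list and watch the kube-proxy ConfigMap, and the apiserver refuses because it finds no relationship between node 'stopped-upgrade-996000' and that object, which typically indicates the node object or the pod binding that would grant access is missing or stale after the stopped-upgrade restart. Impersonation can approximate the check from a working kubeconfig (an illustrative diagnostic only; the node authorizer's graph lookup may still answer differently than this review):

	kubectl auth can-i list configmaps -n kube-system --as=system:node:stopped-upgrade-996000 --as-group=system:nodes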
	I0814 09:53:27.101006    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:27.101011    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:27.136306    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:53:27.136316    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:53:27.147969    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:53:27.147981    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:53:27.160280    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:53:27.160290    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:53:27.171858    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:53:27.171868    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:53:27.203016    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:53:27.203030    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:53:27.226185    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:53:27.226197    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:27.246495    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:53:27.246505    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:53:27.267111    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:53:27.267137    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:53:27.290138    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:53:27.290151    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:53:27.313992    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:27.314006    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:27.319661    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:53:27.319670    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:53:27.340043    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:27.340055    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:27.364226    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:27.364239    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:53:27.364278    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:53:27.364284    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:27.364288    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:27.364292    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:27.364294    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:53:30.362877    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:30.363078    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:30.378860    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:30.378935    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:30.391234    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:30.391305    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:30.402321    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:53:30.402393    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:30.413405    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:30.413472    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:30.423620    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:30.423685    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:30.433946    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:30.434012    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:30.444728    4033 logs.go:276] 0 containers: []
	W0814 09:53:30.444737    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:30.444790    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:30.455385    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:30.455399    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:30.455405    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:30.467084    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:30.467097    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:30.484549    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:30.484570    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:30.496488    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:30.496501    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:30.521044    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:30.521056    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:30.560298    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:30.560308    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:30.578976    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:30.578988    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:30.594137    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:30.594146    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:30.605872    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:30.605883    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:30.617106    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:30.617116    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:30.628981    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:30.628990    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:30.665161    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:30.665169    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:30.669618    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:30.669625    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:33.183616    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:37.367964    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:38.185617    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:38.185778    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:38.198075    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:38.198156    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:38.209425    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:38.209496    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:38.220020    4033 logs.go:276] 2 containers: [ba7f625babd1 178c997768be]
	I0814 09:53:38.220091    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:38.230572    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:38.230639    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:38.241382    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:38.241451    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:38.251803    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:38.251872    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:38.261772    4033 logs.go:276] 0 containers: []
	W0814 09:53:38.261790    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:38.261847    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:38.272537    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:38.272552    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:38.272561    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:38.297508    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:38.297519    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:38.309620    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:38.309631    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:38.345094    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:38.345106    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:38.380567    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:38.380580    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:38.394719    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:38.394730    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:38.406553    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:38.406566    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:38.418842    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:38.418854    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:38.436780    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:38.436789    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:38.441297    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:38.441306    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:38.456612    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:38.456622    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:38.473117    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:38.473127    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:38.487891    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:38.487903    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:42.369970    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:42.370139    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:42.382451    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:53:42.382528    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:42.395005    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:53:42.395088    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:42.408315    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:53:42.408389    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:42.421118    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:53:42.421184    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:42.432538    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:53:42.432609    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:42.443500    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:53:42.443574    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:42.454803    4019 logs.go:276] 0 containers: []
	W0814 09:53:42.454818    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:42.454876    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:42.465842    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:53:42.465861    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:42.465867    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:42.501998    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:53:42.502010    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:53:42.520991    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:42.521001    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:53:42.553258    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:42.553353    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:42.554488    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:53:42.554492    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:53:42.568660    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:53:42.568673    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:53:42.580453    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:53:42.580465    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:53:42.593214    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:53:42.593229    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:53:42.605531    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:53:42.605541    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:53:42.617928    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:53:42.617937    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:53:42.634058    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:53:42.634067    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:53:42.649274    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:42.649283    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:42.653484    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:53:42.653490    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:53:42.667845    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:53:42.667854    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:53:42.683801    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:42.683812    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:42.707985    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:53:42.707992    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:42.720177    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:42.720187    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:53:42.720213    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:53:42.720222    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:42.720226    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:42.720230    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:42.720233    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:53:41.001942    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:46.004143    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:46.004304    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:46.018703    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:46.018784    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:46.029973    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:46.030047    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:46.041318    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:53:46.041394    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:46.060212    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:46.060280    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:46.071068    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:46.071139    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:46.089534    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:46.089599    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:46.100646    4033 logs.go:276] 0 containers: []
	W0814 09:53:46.100655    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:46.100704    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:46.111472    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:46.111489    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:46.111495    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:46.123253    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:46.123264    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:46.158041    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:46.158048    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:46.162740    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:46.162746    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:46.178668    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:46.178678    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:46.193125    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:46.193137    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:46.212289    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:46.212299    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:46.224324    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:46.224336    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:46.241424    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:53:46.241433    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:53:46.252957    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:46.252969    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:46.264704    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:46.264714    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:46.275919    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:46.275931    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:46.287499    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:46.287509    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:46.323580    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:53:46.323593    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:53:46.335081    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:46.335092    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:48.860520    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:52.723969    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:53.862669    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:53.862801    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:53.876011    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:53:53.876092    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:53.887355    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:53:53.887427    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:53.898120    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:53:53.898195    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:53.908113    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:53:53.908176    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:53.918570    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:53:53.918638    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:53.929075    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:53:53.929145    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:53.939929    4033 logs.go:276] 0 containers: []
	W0814 09:53:53.939940    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:53.939999    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:53.950073    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:53:53.950090    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:53:53.950095    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:53:53.962251    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:53.962262    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:53:53.997637    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:53:53.997645    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:53:54.011737    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:53:54.011747    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:53:54.023603    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:53:54.023613    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:54.035493    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:53:54.035504    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:53:54.049356    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:53:54.049366    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:53:54.060771    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:53:54.060782    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:53:54.072426    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:54.072439    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:54.096084    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:54.096092    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:54.140504    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:53:54.140516    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:53:54.156650    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:53:54.156662    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:53:54.177991    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:54.178003    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:54.182707    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:53:54.182714    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:53:54.194496    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:53:54.194508    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:53:57.725992    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:57.726093    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:57.739061    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:53:57.739138    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:57.750495    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:53:57.750564    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:57.766267    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:53:57.766343    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:57.777643    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:53:57.777710    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:57.789146    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:53:57.789218    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:57.800464    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:53:57.800535    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:57.810756    4019 logs.go:276] 0 containers: []
	W0814 09:53:57.810766    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:57.810821    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:57.821987    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:53:57.822004    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:53:57.822013    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:53:57.835697    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:53:57.835708    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:53:57.851925    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:53:57.851940    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:53:57.864430    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:53:57.864441    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:57.876896    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:57.876906    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:57.881744    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:53:57.881751    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:53:57.893921    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:53:57.893932    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:53:57.906469    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:53:57.906484    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:53:57.924437    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:53:57.924446    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:53:57.938657    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:53:57.938669    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:53:57.953811    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:57.953822    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:53:57.987985    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:57.988080    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:57.989268    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:53:57.989279    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:53:58.002697    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:53:58.002707    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:53:58.015181    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:58.015194    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:58.040514    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:58.040525    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:58.076138    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:58.076149    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:53:58.076174    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:53:58.076181    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:58.076185    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:58.076190    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:58.076192    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:53:56.713881    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:01.716278    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:01.716493    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:01.734133    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:01.734223    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:01.747915    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:01.747991    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:01.759904    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:01.759976    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:01.773297    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:01.773369    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:01.784205    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:01.784275    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:01.795172    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:01.795246    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:01.805460    4033 logs.go:276] 0 containers: []
	W0814 09:54:01.805477    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:01.805538    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:01.816813    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:01.816831    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:01.816837    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:01.834057    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:01.834070    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:01.845858    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:01.845869    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:01.857882    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:01.857895    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:01.875530    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:01.875543    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:01.900664    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:01.900671    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:01.939059    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:01.939069    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:01.952857    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:01.952872    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:01.968280    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:01.968289    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:02.003738    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:02.003746    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:02.017097    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:02.017107    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:02.028793    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:02.028803    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:02.034274    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:02.034282    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:02.047581    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:02.047592    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:02.059200    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:02.059211    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:04.572530    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:08.079911    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:09.574675    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:09.574878    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:09.601333    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:09.601415    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:09.614940    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:09.615017    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:09.627430    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:09.627499    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:09.638436    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:09.638502    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:09.649158    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:09.649232    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:09.659938    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:09.660029    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:09.670158    4033 logs.go:276] 0 containers: []
	W0814 09:54:09.670170    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:09.670232    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:09.680965    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:09.680980    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:09.680985    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:09.692096    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:09.692106    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:09.703618    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:09.703628    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:09.738457    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:09.738468    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:09.742745    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:09.742752    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:09.757400    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:09.757412    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:09.771887    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:09.771896    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:09.783407    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:09.783419    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:09.798196    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:09.798207    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:09.810373    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:09.810386    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:09.822050    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:09.822061    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:09.857137    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:09.857147    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:09.869278    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:09.869287    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:09.881618    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:09.881630    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:09.898829    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:09.898839    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:13.081772    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:13.081948    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:13.102355    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:54:13.102479    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:13.120721    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:54:13.120791    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:13.132891    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:54:13.132959    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:13.143132    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:54:13.143204    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:13.156228    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:54:13.156292    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:13.166941    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:54:13.167005    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:13.182778    4019 logs.go:276] 0 containers: []
	W0814 09:54:13.182791    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:13.182843    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:13.193268    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:54:13.193288    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:13.193295    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:13.238917    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:54:13.238927    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:54:13.254174    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:54:13.254186    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:54:13.269386    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:54:13.269397    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:54:13.287076    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:54:13.287089    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:54:13.311332    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:54:13.311345    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:54:13.325520    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:54:13.325531    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:54:13.337534    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:54:13.337548    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:54:13.349221    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:13.349236    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:13.375550    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:13.375563    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:54:13.411420    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:13.411516    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:13.412694    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:13.412699    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:13.417564    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:54:13.417573    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:54:13.430165    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:54:13.430179    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:54:13.446253    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:54:13.446267    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:54:13.458500    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:54:13.458515    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:13.469938    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:13.469948    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:54:13.469975    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:54:13.469981    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:13.469984    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:13.470000    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:13.470002    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:54:12.425469    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:17.426076    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:17.426185    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:17.437496    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:17.437571    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:17.452966    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:17.453034    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:17.463596    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:17.463663    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:17.473993    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:17.474057    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:17.487227    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:17.487295    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:17.497556    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:17.497622    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:17.508150    4033 logs.go:276] 0 containers: []
	W0814 09:54:17.508162    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:17.508228    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:17.523581    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:17.523601    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:17.523607    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:17.540463    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:17.540474    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:17.565843    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:17.565856    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:17.570098    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:17.570104    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:17.584804    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:17.584817    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:17.596351    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:17.596366    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:17.607763    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:17.607777    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:17.619849    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:17.619859    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:17.631643    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:17.631657    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:17.650828    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:17.650841    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:17.686550    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:17.686561    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:17.700771    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:17.700784    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:17.712177    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:17.712188    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:17.723741    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:17.723752    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:17.741002    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:17.741012    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:20.276350    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:23.473649    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:25.278502    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:25.278763    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:25.298786    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:25.298883    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:25.318362    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:25.318445    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:25.330246    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:25.330315    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:25.340587    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:25.340654    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:25.351562    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:25.351630    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:25.363652    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:25.363723    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:25.376491    4033 logs.go:276] 0 containers: []
	W0814 09:54:25.376504    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:25.376565    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:25.387438    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:25.387455    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:25.387461    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:25.422059    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:25.422073    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:25.426340    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:25.426346    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:25.468606    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:25.468620    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:25.486305    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:25.486315    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:25.497716    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:25.497726    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:25.510090    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:25.510103    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:25.522793    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:25.522807    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:25.537907    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:25.537922    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:25.557483    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:25.557491    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:25.569433    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:25.569448    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:25.584321    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:25.584335    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:25.609842    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:25.609850    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:25.622450    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:25.622464    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:25.635041    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:25.635055    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:28.475624    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:28.475705    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:28.486908    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:54:28.486973    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:28.497766    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:54:28.497840    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:28.508184    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:54:28.508248    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:28.520686    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:54:28.520746    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:28.531028    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:54:28.531097    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:28.541734    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:54:28.541804    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:28.551860    4019 logs.go:276] 0 containers: []
	W0814 09:54:28.551870    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:28.551932    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:28.562258    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:54:28.562274    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:28.562280    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:54:28.594919    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:28.595012    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:28.596220    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:54:28.596225    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:54:28.607885    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:54:28.607896    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:54:28.619833    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:54:28.619842    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:54:28.631151    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:54:28.631166    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:54:28.643308    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:28.643323    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:28.648121    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:54:28.648130    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:54:28.661955    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:54:28.661965    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:54:28.680609    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:54:28.680626    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:54:28.692509    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:28.692519    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:28.726353    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:54:28.726370    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:54:28.742834    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:54:28.742852    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:54:28.758038    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:28.758047    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:28.781557    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:54:28.781566    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:54:28.802176    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:54:28.802185    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:28.814018    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:28.814027    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:54:28.814054    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:54:28.814059    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:28.814063    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:28.814069    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:28.814080    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:54:28.147019    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:33.149152    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:33.149373    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:33.169876    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:33.169978    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:33.186248    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:33.186324    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:33.197720    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:33.197803    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:33.214307    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:33.214385    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:33.225051    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:33.225119    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:33.235604    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:33.235667    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:33.246122    4033 logs.go:276] 0 containers: []
	W0814 09:54:33.246132    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:33.246208    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:33.256382    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:33.256399    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:33.256404    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:33.289821    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:33.289830    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:33.326862    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:33.326874    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:33.341736    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:33.341747    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:33.353784    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:33.353794    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:33.366620    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:33.366631    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:33.386548    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:33.386562    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:33.401107    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:33.401117    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:33.419628    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:33.419639    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:33.424679    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:33.424686    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:33.439585    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:33.439596    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:33.465211    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:33.465219    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:33.476941    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:33.476952    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:33.489065    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:33.489076    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:33.508679    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:33.508689    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:38.816138    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:36.028564    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:43.818219    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:43.818406    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:43.834441    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:54:43.834522    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:43.847859    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:54:43.847932    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:43.860251    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:54:43.860326    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:43.871264    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:54:43.871333    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:43.882176    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:54:43.882250    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:43.893277    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:54:43.893344    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:43.903983    4019 logs.go:276] 0 containers: []
	W0814 09:54:43.903992    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:43.904047    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:43.914374    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:54:43.914393    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:43.914399    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:43.918782    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:54:43.918789    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:54:43.937592    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:54:43.937605    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:54:43.948786    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:43.948797    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:43.985066    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:54:43.985080    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:54:43.997342    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:54:43.997353    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:54:44.008681    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:44.008694    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:44.034156    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:44.034168    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:54:44.066969    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:44.067062    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:44.068237    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:54:44.068243    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:54:44.082869    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:54:44.082880    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:54:44.094363    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:54:44.094372    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:54:44.109307    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:54:44.109319    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:54:44.121235    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:54:44.121247    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:54:44.138538    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:54:44.138548    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:54:44.150198    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:54:44.150208    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:44.161517    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:44.161526    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:54:44.161555    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:54:44.161559    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:44.161563    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:44.161568    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:44.161571    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:54:41.030642    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:41.030887    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:41.049840    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:41.049931    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:41.062119    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:41.062195    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:41.074676    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:41.074745    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:41.085236    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:41.085305    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:41.095792    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:41.095854    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:41.106053    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:41.106127    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:41.120590    4033 logs.go:276] 0 containers: []
	W0814 09:54:41.120604    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:41.120667    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:41.131802    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:41.131818    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:41.131823    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:41.147047    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:41.147058    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:41.166691    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:41.166701    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:41.178277    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:41.178288    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:41.213219    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:41.213232    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:41.225223    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:41.225234    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:41.244862    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:41.244875    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:41.270288    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:41.270298    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:41.305323    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:41.305336    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:41.321905    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:41.321920    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:41.336154    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:41.336170    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:41.348199    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:41.348209    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:41.353052    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:41.353060    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:41.367950    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:41.367963    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:41.385895    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:41.385905    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:43.901193    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:48.903235    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:48.903478    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:48.928799    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:48.928908    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:48.945401    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:48.945487    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:48.959177    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:48.959256    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:48.970725    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:48.970802    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:48.980830    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:48.980900    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:48.991926    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:48.991995    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:49.001959    4033 logs.go:276] 0 containers: []
	W0814 09:54:49.001970    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:49.002027    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:49.012310    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:49.012326    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:49.012331    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:49.026477    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:49.026488    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:49.061317    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:49.061330    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:49.073552    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:49.073567    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:49.085586    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:49.085598    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:49.109459    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:49.109470    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:49.121640    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:49.121652    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:49.157125    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:49.157135    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:49.171269    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:49.171279    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:49.182567    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:49.182578    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:49.186931    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:49.186938    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:49.199002    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:49.199013    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:49.214339    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:49.214349    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:49.226166    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:49.226176    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:49.249320    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:49.249328    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:54.165231    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:51.762892    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:59.167290    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:59.167432    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:59.182437    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:54:59.182535    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:59.193770    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:54:59.193842    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:59.204933    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:54:59.205003    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:59.215343    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:54:59.215402    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:59.226992    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:54:59.227060    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:59.237650    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:54:59.237724    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:59.247732    4019 logs.go:276] 0 containers: []
	W0814 09:54:59.247745    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:59.247796    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:59.258638    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:54:59.258651    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:54:59.258657    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:54:59.272852    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:54:59.272862    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:54:59.284515    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:59.284524    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:54:59.317960    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:59.318059    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:59.319275    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:59.319283    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:59.354843    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:54:59.354854    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:59.366788    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:54:59.366798    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:54:59.379021    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:54:59.379031    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:54:59.394706    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:59.394718    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:59.420046    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:59.420054    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:59.424401    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:54:59.424409    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:54:59.437431    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:54:59.437442    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:54:59.449813    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:54:59.449825    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:54:56.765185    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:56.765448    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:56.788070    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:54:56.788165    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:56.803262    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:54:56.803344    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:56.816062    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:54:56.816137    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:56.829680    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:54:56.829756    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:56.839988    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:54:56.840061    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:56.850204    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:54:56.850277    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:56.862043    4033 logs.go:276] 0 containers: []
	W0814 09:54:56.862059    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:56.862121    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:56.876172    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:54:56.876192    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:54:56.876198    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:54:56.888450    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:54:56.888461    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:54:56.900442    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:54:56.900451    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:54:56.912073    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:54:56.912084    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:54:56.946504    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:54:56.946514    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:54:56.958259    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:56.958273    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:54:56.991886    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:54:56.991894    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:54:57.005986    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:54:57.005995    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:54:57.019883    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:54:57.019897    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:54:57.035747    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:54:57.035756    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:54:57.053496    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:57.053506    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:57.057693    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:54:57.057701    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:54:57.069377    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:57.069390    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:57.093974    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:54:57.093982    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:57.105329    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:57.105340    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:59.642614    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:59.467277    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:54:59.467287    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:54:59.478887    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:54:59.478897    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:54:59.493316    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:54:59.493329    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:54:59.504677    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:59.504687    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:54:59.504712    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:54:59.504717    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:59.504720    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:59.504724    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:59.504726    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:55:04.644643    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:04.644760    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:55:04.655928    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:55:04.655995    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:55:04.667159    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:55:04.667246    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:55:04.681773    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:55:04.681857    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:55:04.692711    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:55:04.692769    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:55:04.703287    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:55:04.703346    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:55:04.714950    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:55:04.715007    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:55:04.725614    4033 logs.go:276] 0 containers: []
	W0814 09:55:04.725626    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:55:04.725684    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:55:04.736157    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:55:04.736174    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:55:04.736179    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:55:04.748393    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:55:04.748406    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:55:04.760885    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:55:04.760898    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:55:04.794721    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:55:04.794730    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:55:04.834387    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:55:04.834398    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:55:04.853059    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:55:04.853072    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:55:04.865286    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:55:04.865298    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:55:04.882379    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:55:04.882389    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:55:04.896276    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:55:04.896288    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:55:04.901171    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:55:04.901180    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:55:04.919373    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:55:04.919387    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:55:04.931544    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:55:04.931558    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:55:04.943681    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:55:04.943693    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:55:04.957962    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:55:04.957972    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:55:04.973024    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:55:04.973034    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
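
Both minikube processes in this trace (pids 4019 and 4033) are stuck in the same loop: probe https://10.0.2.15:8443/healthz, hit the client timeout, then shell into the guest to re-collect component logs before trying again. A minimal manual reproduction of that probe, assuming it is run from wherever the minikube process runs (the IP and port come from the log lines above):

    # Probe the apiserver the way the harness does, with a short client timeout.
    # A healthy endpoint returns HTTP 200 with the body "ok"; here the request
    # hangs until the timeout fires, surfacing as "context deadline exceeded".
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
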
	I0814 09:55:07.498432    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:55:09.508456    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:55:12.500573    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:12.500695    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:55:12.514094    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:55:12.514163    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:55:12.527429    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:55:12.527496    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:55:12.538087    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:55:12.538158    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:55:12.548754    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:55:12.548826    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:55:12.559083    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:55:12.559144    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:55:12.572864    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:55:12.572929    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:55:12.588840    4033 logs.go:276] 0 containers: []
	W0814 09:55:12.588854    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:55:12.588909    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:55:12.599307    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:55:12.599323    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:55:12.599328    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:55:12.633344    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:55:12.633351    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:55:12.668531    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:55:12.668541    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:55:12.685986    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:55:12.685997    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:55:12.697882    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:55:12.697892    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:55:12.710219    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:55:12.710228    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:55:12.721875    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:55:12.721887    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:55:12.736599    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:55:12.736608    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:55:12.748353    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:55:12.748365    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:55:12.760178    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:55:12.760189    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:55:12.765052    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:55:12.765062    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:55:12.779032    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:55:12.779041    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:55:12.790495    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:55:12.790505    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:55:12.804235    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:55:12.804243    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:55:12.815727    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:55:12.815737    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:55:15.342495    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:55:14.510536    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:14.510748    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:55:14.541219    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:55:14.541320    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:55:14.558548    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:55:14.558631    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:55:14.578310    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:55:14.578385    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:55:14.589482    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:55:14.589548    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:55:14.599704    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:55:14.599783    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:55:14.610567    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:55:14.610651    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:55:14.624967    4019 logs.go:276] 0 containers: []
	W0814 09:55:14.624978    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:55:14.625044    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:55:14.635501    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:55:14.635516    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:55:14.635521    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:55:14.667665    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:55:14.667759    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:55:14.668901    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:55:14.668907    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:55:14.682657    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:55:14.682667    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:55:14.694573    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:55:14.694584    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:55:14.715890    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:55:14.715899    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:55:14.727509    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:55:14.727519    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:55:14.732195    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:55:14.732200    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:55:14.744205    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:55:14.744215    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:55:14.767026    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:55:14.767036    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:55:14.779654    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:55:14.779666    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:55:14.814752    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:55:14.814763    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:55:14.829962    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:55:14.829972    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:55:14.841509    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:55:14.841519    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:55:14.852924    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:55:14.852933    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:55:14.867522    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:55:14.867532    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:55:14.879786    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:55:14.879797    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:55:14.879828    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:55:14.879833    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:55:14.879837    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:55:14.879842    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:55:14.879845    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
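
The only kubelet problem the log scanner keeps surfacing is a node-authorizer denial: the kubelet of stopped-upgrade-996000 is forbidden from listing the kube-proxy ConfigMap because the apiserver finds no relationship between that node and the object. A hedged way to reproduce the same authorization answer from an admin kubeconfig, using kubectl's standard impersonation flags (the node name comes from the messages above):

    # Ask the authorizer the question the kubelet asked, impersonating the
    # node identity; "no" corresponds to the forbidden errors in the log.
    kubectl auth can-i list configmaps \
      --namespace kube-system \
      --as system:node:stopped-upgrade-996000 \
      --as-group system:nodes
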
	I0814 09:55:20.344611    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:20.344787    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:55:20.365998    4033 logs.go:276] 1 containers: [741deb167866]
	I0814 09:55:20.366092    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:55:20.380769    4033 logs.go:276] 1 containers: [925172ee97cb]
	I0814 09:55:20.380844    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:55:20.393744    4033 logs.go:276] 4 containers: [4e6f837b0ac6 b287389b4ec9 ba7f625babd1 178c997768be]
	I0814 09:55:20.393834    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:55:20.404670    4033 logs.go:276] 1 containers: [2f06dc386c61]
	I0814 09:55:20.404746    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:55:20.415329    4033 logs.go:276] 1 containers: [4ffe967a65fd]
	I0814 09:55:20.415397    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:55:20.425775    4033 logs.go:276] 1 containers: [2d9d0cc2c886]
	I0814 09:55:20.425844    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:55:20.441975    4033 logs.go:276] 0 containers: []
	W0814 09:55:20.441987    4033 logs.go:278] No container was found matching "kindnet"
	I0814 09:55:20.442044    4033 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:55:20.453033    4033 logs.go:276] 1 containers: [cd7351187c71]
	I0814 09:55:20.453049    4033 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:55:20.453055    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:55:20.488495    4033 logs.go:123] Gathering logs for etcd [925172ee97cb] ...
	I0814 09:55:20.488510    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 925172ee97cb"
	I0814 09:55:20.510319    4033 logs.go:123] Gathering logs for coredns [ba7f625babd1] ...
	I0814 09:55:20.510334    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba7f625babd1"
	I0814 09:55:20.521848    4033 logs.go:123] Gathering logs for kube-scheduler [2f06dc386c61] ...
	I0814 09:55:20.521859    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f06dc386c61"
	I0814 09:55:20.536588    4033 logs.go:123] Gathering logs for container status ...
	I0814 09:55:20.536600    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:55:20.549009    4033 logs.go:123] Gathering logs for dmesg ...
	I0814 09:55:20.549021    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:55:20.553376    4033 logs.go:123] Gathering logs for kubelet ...
	I0814 09:55:20.553386    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:55:20.587174    4033 logs.go:123] Gathering logs for coredns [4e6f837b0ac6] ...
	I0814 09:55:20.587190    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e6f837b0ac6"
	I0814 09:55:20.599599    4033 logs.go:123] Gathering logs for kube-proxy [4ffe967a65fd] ...
	I0814 09:55:20.599610    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ffe967a65fd"
	I0814 09:55:20.611413    4033 logs.go:123] Gathering logs for storage-provisioner [cd7351187c71] ...
	I0814 09:55:20.611425    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7351187c71"
	I0814 09:55:20.622654    4033 logs.go:123] Gathering logs for kube-apiserver [741deb167866] ...
	I0814 09:55:20.622666    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 741deb167866"
	I0814 09:55:20.636694    4033 logs.go:123] Gathering logs for coredns [b287389b4ec9] ...
	I0814 09:55:20.636709    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b287389b4ec9"
	I0814 09:55:20.648357    4033 logs.go:123] Gathering logs for coredns [178c997768be] ...
	I0814 09:55:20.648368    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 178c997768be"
	I0814 09:55:20.660988    4033 logs.go:123] Gathering logs for kube-controller-manager [2d9d0cc2c886] ...
	I0814 09:55:20.660998    4033 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d9d0cc2c886"
	I0814 09:55:20.679003    4033 logs.go:123] Gathering logs for Docker ...
	I0814 09:55:20.679018    4033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:55:23.205473    4033 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:55:28.207571    4033 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:28.210186    4033 out.go:177] 
	W0814 09:55:28.214564    4033 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0814 09:55:28.214575    4033 out.go:239] * 
	W0814 09:55:28.215279    4033 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:55:28.226451    4033 out.go:177] 
	I0814 09:55:24.883606    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:55:29.885683    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:29.889996    4019 out.go:177] 
	W0814 09:55:29.894010    4019 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0814 09:55:29.894018    4019 out.go:239] * 
	W0814 09:55:29.894519    4019 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:55:29.902890    4019 out.go:177] 
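
Both runs exit with GUEST_START once the 6m0s wait-for-healthy-API-server budget is spent. The follow-up below is the report's own suggestion from the box above, plus the journal units the harness was already tailing (mirroring the Run: lines earlier in this log):

    # Collect the full log bundle the GitHub issue template asks for:
    minikube logs --file=logs.txt
    # Inside the guest, replay the same journal queries the harness ran:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
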
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-08-14 16:46:27 UTC, ends at Wed 2024-08-14 16:55:44 UTC. --
	Aug 14 16:55:29 running-upgrade-579000 dockerd[3430]: time="2024-08-14T16:55:29.138721793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 14 16:55:29 running-upgrade-579000 dockerd[3430]: time="2024-08-14T16:55:29.138796833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 14 16:55:29 running-upgrade-579000 dockerd[3430]: time="2024-08-14T16:55:29.138808374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 14 16:55:29 running-upgrade-579000 dockerd[3430]: time="2024-08-14T16:55:29.138880496Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/29a1b0afb3f0ccb8e8b562ad99f083982915a8a8500519988fb80ad1a6c95b7b pid=19106 runtime=io.containerd.runc.v2
	Aug 14 16:55:29 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:29Z" level=error msg="ContainerStats resp: {0x40004e5d40 linux}"
	Aug 14 16:55:30 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:30Z" level=error msg="ContainerStats resp: {0x40008c3680 linux}"
	Aug 14 16:55:30 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:30Z" level=error msg="ContainerStats resp: {0x40005c9140 linux}"
	Aug 14 16:55:30 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:30Z" level=error msg="ContainerStats resp: {0x40005c9300 linux}"
	Aug 14 16:55:30 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:30Z" level=error msg="ContainerStats resp: {0x40005c9a00 linux}"
	Aug 14 16:55:30 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:30Z" level=error msg="ContainerStats resp: {0x40005c9b40 linux}"
	Aug 14 16:55:30 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:30Z" level=error msg="ContainerStats resp: {0x40005c9f80 linux}"
	Aug 14 16:55:30 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:30Z" level=error msg="ContainerStats resp: {0x40007ec9c0 linux}"
	Aug 14 16:55:31 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:31Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 14 16:55:36 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:36Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 14 16:55:40 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:40Z" level=error msg="ContainerStats resp: {0x40007f9040 linux}"
	Aug 14 16:55:40 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:40Z" level=error msg="ContainerStats resp: {0x400058e680 linux}"
	Aug 14 16:55:41 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:41Z" level=error msg="ContainerStats resp: {0x4000944040 linux}"
	Aug 14 16:55:41 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:41Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 14 16:55:42 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:42Z" level=error msg="ContainerStats resp: {0x4000944d80 linux}"
	Aug 14 16:55:42 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:42Z" level=error msg="ContainerStats resp: {0x40007ed8c0 linux}"
	Aug 14 16:55:42 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:42Z" level=error msg="ContainerStats resp: {0x4000945540 linux}"
	Aug 14 16:55:42 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:42Z" level=error msg="ContainerStats resp: {0x4000358580 linux}"
	Aug 14 16:55:42 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:42Z" level=error msg="ContainerStats resp: {0x4000945f40 linux}"
	Aug 14 16:55:42 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:42Z" level=error msg="ContainerStats resp: {0x40004e4700 linux}"
	Aug 14 16:55:42 running-upgrade-579000 cri-dockerd[3269]: time="2024-08-14T16:55:42Z" level=error msg="ContainerStats resp: {0x40004e4d40 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9eda890958b4a       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   8c9551df3afee
	29a1b0afb3f0c       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   c5179b226ca99
	4e6f837b0ac65       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   c5179b226ca99
	b287389b4ec96       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   8c9551df3afee
	4ffe967a65fd5       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   3f0d15f63ba7d
	cd7351187c715       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   d2e96322ebf8c
	2d9d0cc2c8868       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   364916d97a832
	925172ee97cbd       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   81fc688ec3181
	2f06dc386c611       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   8c2058cc76cf6
	741deb1678665       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   4ed73c4e0ac2f
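
The table shows both coredns pods on attempt 2, their attempt-1 containers having exited roughly two minutes earlier, while every control-plane container has run undisturbed for four minutes. A hedged pair of commands to pull the same view straight from the runtime, mirroring the crictl-or-docker fallback the harness uses:

    # Same data as the container-status table above, filtered to coredns:
    sudo crictl ps -a --name coredns
    # or, on this docker-backed cluster:
    docker ps -a --filter name=k8s_coredns
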
	
	
	==> coredns [29a1b0afb3f0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 324405080745819295.5121011186260364728. HINFO: read udp 10.244.0.3:37658->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 324405080745819295.5121011186260364728. HINFO: read udp 10.244.0.3:39199->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 324405080745819295.5121011186260364728. HINFO: read udp 10.244.0.3:53960->10.0.2.3:53: i/o timeout
	
	
	==> coredns [4e6f837b0ac6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8374051983009929719.3576441430644785263. HINFO: read udp 10.244.0.3:58090->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8374051983009929719.3576441430644785263. HINFO: read udp 10.244.0.3:40055->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8374051983009929719.3576441430644785263. HINFO: read udp 10.244.0.3:56654->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8374051983009929719.3576441430644785263. HINFO: read udp 10.244.0.3:34491->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8374051983009929719.3576441430644785263. HINFO: read udp 10.244.0.3:57270->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8374051983009929719.3576441430644785263. HINFO: read udp 10.244.0.3:38650->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8374051983009929719.3576441430644785263. HINFO: read udp 10.244.0.3:36362->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8374051983009929719.3576441430644785263. HINFO: read udp 10.244.0.3:46562->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8374051983009929719.3576441430644785263. HINFO: read udp 10.244.0.3:55213->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8374051983009929719.3576441430644785263. HINFO: read udp 10.244.0.3:33072->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9eda890958b4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4991867086915344846.2528968083211184561. HINFO: read udp 10.244.0.2:42599->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4991867086915344846.2528968083211184561. HINFO: read udp 10.244.0.2:43530->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4991867086915344846.2528968083211184561. HINFO: read udp 10.244.0.2:37996->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b287389b4ec9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 949959839470166549.883319812214007192. HINFO: read udp 10.244.0.2:60068->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 949959839470166549.883319812214007192. HINFO: read udp 10.244.0.2:59889->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 949959839470166549.883319812214007192. HINFO: read udp 10.244.0.2:32923->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 949959839470166549.883319812214007192. HINFO: read udp 10.244.0.2:47214->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 949959839470166549.883319812214007192. HINFO: read udp 10.244.0.2:41752->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 949959839470166549.883319812214007192. HINFO: read udp 10.244.0.2:39543->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 949959839470166549.883319812214007192. HINFO: read udp 10.244.0.2:37486->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 949959839470166549.883319812214007192. HINFO: read udp 10.244.0.2:47738->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 949959839470166549.883319812214007192. HINFO: read udp 10.244.0.2:52061->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 949959839470166549.883319812214007192. HINFO: read udp 10.244.0.2:41464->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
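
Every coredns instance logs the same failure mode: HINFO probes to the upstream resolver 10.0.2.3:53 (the DNS provided by QEMU user-mode networking) time out, so any lookup that needs forwarding cannot complete. A quick in-guest check of that upstream, assuming the dig utility is available in the guest image:

    # Query the slirp DNS directly with a tight timeout; a hang here matches
    # the "i/o timeout" errors coredns reports above.
    dig @10.0.2.3 example.com +time=2 +tries=1
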
	
	
	==> describe nodes <==
	Name:               running-upgrade-579000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-579000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=running-upgrade-579000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T09_51_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:51:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-579000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:55:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:51:27 +0000   Wed, 14 Aug 2024 16:51:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:51:27 +0000   Wed, 14 Aug 2024 16:51:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:51:27 +0000   Wed, 14 Aug 2024 16:51:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:51:27 +0000   Wed, 14 Aug 2024 16:51:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-579000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 0536645277624343b7aef5228377f500
	  System UUID:                0536645277624343b7aef5228377f500
	  Boot ID:                    adfde626-1a6f-47fd-a162-fc90d95ae645
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-pt244                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-vbx96                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-579000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-579000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-579000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-89k99                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-579000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-579000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-579000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-579000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-579000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-579000 event: Registered Node running-upgrade-579000 in Controller
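
The node itself reports Ready with no memory, disk, or PID pressure, so the stuck healthz probes are not a resource-starvation symptom. This description was produced by the harness's own describe-nodes step; to re-run it by hand inside the VM, scoped to this node:

    # Same command the harness runs (see the Run: lines above), for one node:
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe node running-upgrade-579000 \
      --kubeconfig=/var/lib/minikube/kubeconfig
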
	
	
	==> dmesg <==
	[  +0.080835] systemd-fstab-generator[843]: Ignoring "noauto" for root device
	[  +0.082871] systemd-fstab-generator[854]: Ignoring "noauto" for root device
	[  +1.140158] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.085607] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.075572] systemd-fstab-generator[1014]: Ignoring "noauto" for root device
	[  +3.046976] systemd-fstab-generator[1292]: Ignoring "noauto" for root device
	[  +9.118658] systemd-fstab-generator[1921]: Ignoring "noauto" for root device
	[Aug14 16:47] systemd-fstab-generator[2275]: Ignoring "noauto" for root device
	[  +0.156080] systemd-fstab-generator[2310]: Ignoring "noauto" for root device
	[  +0.107474] systemd-fstab-generator[2321]: Ignoring "noauto" for root device
	[  +0.108321] systemd-fstab-generator[2334]: Ignoring "noauto" for root device
	[  +3.561904] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.238149] systemd-fstab-generator[3226]: Ignoring "noauto" for root device
	[  +0.083669] systemd-fstab-generator[3237]: Ignoring "noauto" for root device
	[  +0.085272] systemd-fstab-generator[3248]: Ignoring "noauto" for root device
	[  +0.096455] systemd-fstab-generator[3262]: Ignoring "noauto" for root device
	[  +2.436694] systemd-fstab-generator[3414]: Ignoring "noauto" for root device
	[  +1.740918] systemd-fstab-generator[3760]: Ignoring "noauto" for root device
	[  +1.030756] systemd-fstab-generator[3904]: Ignoring "noauto" for root device
	[  +5.004644] kauditd_printk_skb: 68 callbacks suppressed
	[ +11.917594] kauditd_printk_skb: 8 callbacks suppressed
	[Aug14 16:51] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.754402] systemd-fstab-generator[12163]: Ignoring "noauto" for root device
	[  +5.127217] systemd-fstab-generator[12748]: Ignoring "noauto" for root device
	[  +0.476098] systemd-fstab-generator[12886]: Ignoring "noauto" for root device
	
	
	==> etcd [925172ee97cb] <==
	{"level":"info","ts":"2024-08-14T16:51:23.241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-14T16:51:23.241Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-14T16:51:23.243Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T16:51:23.244Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T16:51:23.244Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T16:51:23.244Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-14T16:51:23.244Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-14T16:51:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-14T16:51:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-14T16:51:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-14T16:51:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-14T16:51:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-14T16:51:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-14T16:51:23.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-14T16:51:23.340Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-579000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T16:51:23.340Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T16:51:23.341Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T16:51:23.340Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T16:51:23.340Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T16:51:23.342Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-14T16:51:23.341Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T16:51:23.342Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T16:51:23.348Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T16:51:23.354Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T16:51:23.354Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 16:55:44 up 9 min,  0 users,  load average: 0.19, 0.27, 0.18
	Linux running-upgrade-579000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [741deb167866] <==
	I0814 16:51:25.055806       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0814 16:51:25.055869       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0814 16:51:25.056972       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0814 16:51:25.057187       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 16:51:25.057230       1 cache.go:39] Caches are synced for autoregister controller
	I0814 16:51:25.095905       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0814 16:51:25.104159       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0814 16:51:25.792647       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0814 16:51:25.959637       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0814 16:51:25.960809       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0814 16:51:25.960818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0814 16:51:26.096574       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 16:51:26.111313       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 16:51:26.222646       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0814 16:51:26.225380       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0814 16:51:26.225759       1 controller.go:611] quota admission added evaluator for: endpoints
	I0814 16:51:26.228297       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0814 16:51:27.117390       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0814 16:51:27.413456       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0814 16:51:27.419307       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0814 16:51:27.446021       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0814 16:51:27.476341       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 16:51:40.622801       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0814 16:51:40.877901       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0814 16:51:41.998270       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
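
Nothing in the apiserver's own log suggests trouble: caches sync, admission evaluators register, service IPs are allocated, and the log goes quiet after 16:51:41, four minutes before the host-side probes give up. That points the investigation at the probe path rather than the process; a hedged in-guest check:

    # From an SSH session in the VM: if this returns "ok" while the host-side
    # probe times out, the host-to-guest path is the problem, not the apiserver.
    curl -k https://127.0.0.1:8443/healthz
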
	
	
	==> kube-controller-manager [2d9d0cc2c886] <==
	I0814 16:51:39.946649       1 shared_informer.go:262] Caches are synced for disruption
	I0814 16:51:39.946663       1 disruption.go:371] Sending events to api server.
	I0814 16:51:39.969384       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0814 16:51:39.972060       1 shared_informer.go:262] Caches are synced for namespace
	I0814 16:51:39.972107       1 shared_informer.go:262] Caches are synced for service account
	I0814 16:51:39.972188       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0814 16:51:39.972555       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0814 16:51:40.029392       1 shared_informer.go:262] Caches are synced for persistent volume
	I0814 16:51:40.033708       1 shared_informer.go:262] Caches are synced for attach detach
	I0814 16:51:40.035976       1 shared_informer.go:262] Caches are synced for PV protection
	I0814 16:51:40.037060       1 shared_informer.go:262] Caches are synced for expand
	I0814 16:51:40.048876       1 shared_informer.go:262] Caches are synced for stateful set
	I0814 16:51:40.068090       1 shared_informer.go:262] Caches are synced for daemon sets
	I0814 16:51:40.128211       1 shared_informer.go:262] Caches are synced for resource quota
	I0814 16:51:40.171617       1 shared_informer.go:262] Caches are synced for cronjob
	I0814 16:51:40.174843       1 shared_informer.go:262] Caches are synced for resource quota
	I0814 16:51:40.219620       1 shared_informer.go:262] Caches are synced for job
	I0814 16:51:40.224104       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0814 16:51:40.595287       1 shared_informer.go:262] Caches are synced for garbage collector
	I0814 16:51:40.622912       1 shared_informer.go:262] Caches are synced for garbage collector
	I0814 16:51:40.622950       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0814 16:51:40.624720       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0814 16:51:40.880998       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-89k99"
	I0814 16:51:40.978081       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vbx96"
	I0814 16:51:40.987956       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-pt244"
	
	
	==> kube-proxy [4ffe967a65fd] <==
	I0814 16:51:41.988433       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0814 16:51:41.988457       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0814 16:51:41.988466       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0814 16:51:41.996994       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0814 16:51:41.997005       1 server_others.go:206] "Using iptables Proxier"
	I0814 16:51:41.997024       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0814 16:51:41.997115       1 server.go:661] "Version info" version="v1.24.1"
	I0814 16:51:41.997118       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:51:41.997421       1 config.go:317] "Starting service config controller"
	I0814 16:51:41.997428       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0814 16:51:41.997436       1 config.go:226] "Starting endpoint slice config controller"
	I0814 16:51:41.997437       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0814 16:51:41.997785       1 config.go:444] "Starting node config controller"
	I0814 16:51:41.997788       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0814 16:51:42.100096       1 shared_informer.go:262] Caches are synced for node config
	I0814 16:51:42.100105       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0814 16:51:42.100116       1 shared_informer.go:262] Caches are synced for service config
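
kube-proxy detected the node IP, assumed iptables mode (no mode was configured), and synced all three config controllers at 16:51:42, so service NAT rules should be programmed. A hedged spot check of those rules from inside the guest:

    # List the top-level service chain the iptables proxier maintains:
    sudo iptables -t nat -S KUBE-SERVICES | head -n 20
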
	
	
	==> kube-scheduler [2f06dc386c61] <==
	W0814 16:51:25.020198       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 16:51:25.020227       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0814 16:51:25.020260       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 16:51:25.020286       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0814 16:51:25.020326       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 16:51:25.020338       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0814 16:51:25.020435       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 16:51:25.020442       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0814 16:51:25.020462       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 16:51:25.020471       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0814 16:51:25.020500       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 16:51:25.020507       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0814 16:51:25.022019       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 16:51:25.022072       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0814 16:51:25.861152       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 16:51:25.861174       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0814 16:51:25.955484       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 16:51:25.955582       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0814 16:51:25.993436       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 16:51:25.993562       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0814 16:51:26.009173       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 16:51:26.009218       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0814 16:51:26.025128       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 16:51:26.025231       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0814 16:51:26.618145       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-08-14 16:46:27 UTC, ends at Wed 2024-08-14 16:55:44 UTC. --
	Aug 14 16:51:29 running-upgrade-579000 kubelet[12754]: E0814 16:51:29.256801   12754 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-579000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-579000"
	Aug 14 16:51:29 running-upgrade-579000 kubelet[12754]: E0814 16:51:29.456895   12754 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-579000\" already exists" pod="kube-system/etcd-running-upgrade-579000"
	Aug 14 16:51:29 running-upgrade-579000 kubelet[12754]: I0814 16:51:29.654314   12754 request.go:601] Waited for 1.126250772s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 14 16:51:29 running-upgrade-579000 kubelet[12754]: E0814 16:51:29.657380   12754 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-579000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-579000"
	Aug 14 16:51:39 running-upgrade-579000 kubelet[12754]: I0814 16:51:39.932785   12754 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 16:51:39 running-upgrade-579000 kubelet[12754]: I0814 16:51:39.975470   12754 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 14 16:51:39 running-upgrade-579000 kubelet[12754]: I0814 16:51:39.975795   12754 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 14 16:51:40 running-upgrade-579000 kubelet[12754]: I0814 16:51:40.075791   12754 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/12b2f913-2280-4ef6-953a-f60fafa27196-tmp\") pod \"storage-provisioner\" (UID: \"12b2f913-2280-4ef6-953a-f60fafa27196\") " pod="kube-system/storage-provisioner"
	Aug 14 16:51:40 running-upgrade-579000 kubelet[12754]: I0814 16:51:40.075929   12754 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbcn2\" (UniqueName: \"kubernetes.io/projected/12b2f913-2280-4ef6-953a-f60fafa27196-kube-api-access-lbcn2\") pod \"storage-provisioner\" (UID: \"12b2f913-2280-4ef6-953a-f60fafa27196\") " pod="kube-system/storage-provisioner"
	Aug 14 16:51:40 running-upgrade-579000 kubelet[12754]: E0814 16:51:40.179270   12754 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 14 16:51:40 running-upgrade-579000 kubelet[12754]: E0814 16:51:40.179292   12754 projected.go:192] Error preparing data for projected volume kube-api-access-lbcn2 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 14 16:51:40 running-upgrade-579000 kubelet[12754]: E0814 16:51:40.179329   12754 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/12b2f913-2280-4ef6-953a-f60fafa27196-kube-api-access-lbcn2 podName:12b2f913-2280-4ef6-953a-f60fafa27196 nodeName:}" failed. No retries permitted until 2024-08-14 16:51:40.679316097 +0000 UTC m=+13.278543809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lbcn2" (UniqueName: "kubernetes.io/projected/12b2f913-2280-4ef6-953a-f60fafa27196-kube-api-access-lbcn2") pod "storage-provisioner" (UID: "12b2f913-2280-4ef6-953a-f60fafa27196") : configmap "kube-root-ca.crt" not found
	Aug 14 16:51:40 running-upgrade-579000 kubelet[12754]: I0814 16:51:40.883839   12754 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 16:51:40 running-upgrade-579000 kubelet[12754]: I0814 16:51:40.981695   12754 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 16:51:40 running-upgrade-579000 kubelet[12754]: I0814 16:51:40.988523   12754 topology_manager.go:200] "Topology Admit Handler"
	Aug 14 16:51:41 running-upgrade-579000 kubelet[12754]: I0814 16:51:41.083442   12754 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8bfd5341-7a34-4b0b-9e01-1516c4caae23-kube-proxy\") pod \"kube-proxy-89k99\" (UID: \"8bfd5341-7a34-4b0b-9e01-1516c4caae23\") " pod="kube-system/kube-proxy-89k99"
	Aug 14 16:51:41 running-upgrade-579000 kubelet[12754]: I0814 16:51:41.083464   12754 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bfd5341-7a34-4b0b-9e01-1516c4caae23-xtables-lock\") pod \"kube-proxy-89k99\" (UID: \"8bfd5341-7a34-4b0b-9e01-1516c4caae23\") " pod="kube-system/kube-proxy-89k99"
	Aug 14 16:51:41 running-upgrade-579000 kubelet[12754]: I0814 16:51:41.083475   12754 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bfd5341-7a34-4b0b-9e01-1516c4caae23-lib-modules\") pod \"kube-proxy-89k99\" (UID: \"8bfd5341-7a34-4b0b-9e01-1516c4caae23\") " pod="kube-system/kube-proxy-89k99"
	Aug 14 16:51:41 running-upgrade-579000 kubelet[12754]: I0814 16:51:41.083485   12754 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b9dd79-9ca7-485e-84da-849be1152032-config-volume\") pod \"coredns-6d4b75cb6d-vbx96\" (UID: \"50b9dd79-9ca7-485e-84da-849be1152032\") " pod="kube-system/coredns-6d4b75cb6d-vbx96"
	Aug 14 16:51:41 running-upgrade-579000 kubelet[12754]: I0814 16:51:41.083497   12754 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c82q\" (UniqueName: \"kubernetes.io/projected/50b9dd79-9ca7-485e-84da-849be1152032-kube-api-access-4c82q\") pod \"coredns-6d4b75cb6d-vbx96\" (UID: \"50b9dd79-9ca7-485e-84da-849be1152032\") " pod="kube-system/coredns-6d4b75cb6d-vbx96"
	Aug 14 16:51:41 running-upgrade-579000 kubelet[12754]: I0814 16:51:41.083508   12754 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wkt\" (UniqueName: \"kubernetes.io/projected/8bfd5341-7a34-4b0b-9e01-1516c4caae23-kube-api-access-v6wkt\") pod \"kube-proxy-89k99\" (UID: \"8bfd5341-7a34-4b0b-9e01-1516c4caae23\") " pod="kube-system/kube-proxy-89k99"
	Aug 14 16:51:41 running-upgrade-579000 kubelet[12754]: I0814 16:51:41.183765   12754 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b237e7fd-af09-444c-b8a7-b9b747a1111a-config-volume\") pod \"coredns-6d4b75cb6d-pt244\" (UID: \"b237e7fd-af09-444c-b8a7-b9b747a1111a\") " pod="kube-system/coredns-6d4b75cb6d-pt244"
	Aug 14 16:51:41 running-upgrade-579000 kubelet[12754]: I0814 16:51:41.183802   12754 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2wq\" (UniqueName: \"kubernetes.io/projected/b237e7fd-af09-444c-b8a7-b9b747a1111a-kube-api-access-zl2wq\") pod \"coredns-6d4b75cb6d-pt244\" (UID: \"b237e7fd-af09-444c-b8a7-b9b747a1111a\") " pod="kube-system/coredns-6d4b75cb6d-pt244"
	Aug 14 16:55:29 running-upgrade-579000 kubelet[12754]: I0814 16:55:29.746221   12754 scope.go:110] "RemoveContainer" containerID="178c997768be7de8cf4f56087a7f7a4963ea6a4831e689fa0fe2300645c4eada"
	Aug 14 16:55:29 running-upgrade-579000 kubelet[12754]: I0814 16:55:29.761412   12754 scope.go:110] "RemoveContainer" containerID="ba7f625babd1d45b5bf8247d22c3b39bc0d76faf34bc7d757d7125100b418978"
	
	
	==> storage-provisioner [cd7351187c71] <==
	I0814 16:51:41.042338       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 16:51:41.045862       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 16:51:41.045911       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 16:51:41.048748       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 16:51:41.048876       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-579000_459c8145-cd3e-4c78-9565-d30dd54e90c1!
	I0814 16:51:41.049798       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7dcd74eb-30d8-4a68-9569-e1823d5215b8", APIVersion:"v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-579000_459c8145-cd3e-4c78-9565-d30dd54e90c1 became leader
	I0814 16:51:41.149639       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-579000_459c8145-cd3e-4c78-9565-d30dd54e90c1!
	

-- /stdout --
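
The kube-scheduler "forbidden" list/watch warnings in the scheduler log above are the usual startup race: the scheduler's informers begin listing cluster resources before its RBAC bindings have propagated, and the noise stops once "Caches are synced" is logged. A quick way to confirm the scheduler's permissions on a live cluster is an impersonated access check (illustrative kubectl invocation; the context name matches this profile):

	kubectl --context running-upgrade-579000 auth can-i list nodes --as=system:kube-scheduler
	# expect "yes" once the system:kube-scheduler bindings are in place
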
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-579000 -n running-upgrade-579000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-579000 -n running-upgrade-579000: exit status 2 (15.629931208s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-579000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-579000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-579000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-579000: (1.079739666s)
--- FAIL: TestRunningBinaryUpgrade (610.15s)
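
Most of the 610s here is the harness waiting on an apiserver that never recovered after the binary upgrade; the status probe alone took 15.6s before reporting "Stopped". A minimal manual version of the same post-upgrade health check (same binary and profile as above; the /readyz probe is an extra, illustrative step):

	out/minikube-darwin-arm64 status -p running-upgrade-579000 --format='{{.APIServer}}'
	kubectl --context running-upgrade-579000 get --raw=/readyz   # queries the apiserver readiness endpoint directly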

TestKubernetesUpgrade (18.76s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-652000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-652000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.920333708s)

-- stdout --
	* [kubernetes-upgrade-652000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-652000" primary control-plane node in "kubernetes-upgrade-652000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-652000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:45:32.789652    3926 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:45:32.789880    3926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:45:32.789889    3926 out.go:304] Setting ErrFile to fd 2...
	I0814 09:45:32.789893    3926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:45:32.790174    3926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:45:32.791454    3926 out.go:298] Setting JSON to false
	I0814 09:45:32.807859    3926 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2689,"bootTime":1723651243,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:45:32.807975    3926 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:45:32.812841    3926 out.go:177] * [kubernetes-upgrade-652000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:45:32.819813    3926 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:45:32.819883    3926 notify.go:220] Checking for updates...
	I0814 09:45:32.826721    3926 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:45:32.829776    3926 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:45:32.832782    3926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:45:32.835733    3926 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:45:32.838755    3926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:45:32.842177    3926 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:45:32.842243    3926 config.go:182] Loaded profile config "offline-docker-556000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:45:32.842287    3926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:45:32.846743    3926 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:45:32.853802    3926 start.go:297] selected driver: qemu2
	I0814 09:45:32.853810    3926 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:45:32.853822    3926 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:45:32.856126    3926 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:45:32.858823    3926 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:45:32.861875    3926 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 09:45:32.861930    3926 cni.go:84] Creating CNI manager for ""
	I0814 09:45:32.861940    3926 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0814 09:45:32.861968    3926 start.go:340] cluster config:
	{Name:kubernetes-upgrade-652000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:45:32.865773    3926 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:45:32.873776    3926 out.go:177] * Starting "kubernetes-upgrade-652000" primary control-plane node in "kubernetes-upgrade-652000" cluster
	I0814 09:45:32.877714    3926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0814 09:45:32.877729    3926 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0814 09:45:32.877737    3926 cache.go:56] Caching tarball of preloaded images
	I0814 09:45:32.877788    3926 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:45:32.877794    3926 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0814 09:45:32.877846    3926 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/kubernetes-upgrade-652000/config.json ...
	I0814 09:45:32.877857    3926 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/kubernetes-upgrade-652000/config.json: {Name:mkbac2dd654a49745593071661272d2c84eb55a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:45:32.878064    3926 start.go:360] acquireMachinesLock for kubernetes-upgrade-652000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:45:32.884251    3926 start.go:364] duration metric: took 6.177584ms to acquireMachinesLock for "kubernetes-upgrade-652000"
	I0814 09:45:32.884279    3926 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:45:32.884333    3926 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:45:32.892781    3926 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:45:32.910433    3926 start.go:159] libmachine.API.Create for "kubernetes-upgrade-652000" (driver="qemu2")
	I0814 09:45:32.910461    3926 client.go:168] LocalClient.Create starting
	I0814 09:45:32.910523    3926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:45:32.910555    3926 main.go:141] libmachine: Decoding PEM data...
	I0814 09:45:32.910565    3926 main.go:141] libmachine: Parsing certificate...
	I0814 09:45:32.910602    3926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:45:32.910625    3926 main.go:141] libmachine: Decoding PEM data...
	I0814 09:45:32.910634    3926 main.go:141] libmachine: Parsing certificate...
	I0814 09:45:32.910979    3926 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:45:33.080711    3926 main.go:141] libmachine: Creating SSH key...
	I0814 09:45:33.168357    3926 main.go:141] libmachine: Creating Disk image...
	I0814 09:45:33.168364    3926 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:45:33.168541    3926 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2
	I0814 09:45:33.177960    3926 main.go:141] libmachine: STDOUT: 
	I0814 09:45:33.177978    3926 main.go:141] libmachine: STDERR: 
	I0814 09:45:33.178023    3926 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2 +20000M
	I0814 09:45:33.186011    3926 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:45:33.186031    3926 main.go:141] libmachine: STDERR: 
	I0814 09:45:33.186045    3926 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2
	I0814 09:45:33.186050    3926 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:45:33.186060    3926 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:45:33.186095    3926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:7c:1f:94:d6:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2
	I0814 09:45:33.187686    3926 main.go:141] libmachine: STDOUT: 
	I0814 09:45:33.187703    3926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:45:33.187723    3926 client.go:171] duration metric: took 277.265792ms to LocalClient.Create
	I0814 09:45:35.189833    3926 start.go:128] duration metric: took 2.305559041s to createHost
	I0814 09:45:35.189909    3926 start.go:83] releasing machines lock for "kubernetes-upgrade-652000", held for 2.305713541s
	W0814 09:45:35.189976    3926 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:45:35.205150    3926 out.go:177] * Deleting "kubernetes-upgrade-652000" in qemu2 ...
	W0814 09:45:35.243418    3926 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:45:35.243448    3926 start.go:729] Will try again in 5 seconds ...
	I0814 09:45:40.245557    3926 start.go:360] acquireMachinesLock for kubernetes-upgrade-652000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:45:40.320001    3926 start.go:364] duration metric: took 74.339042ms to acquireMachinesLock for "kubernetes-upgrade-652000"
	I0814 09:45:40.320187    3926 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:45:40.320412    3926 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:45:40.335940    3926 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:45:40.387238    3926 start.go:159] libmachine.API.Create for "kubernetes-upgrade-652000" (driver="qemu2")
	I0814 09:45:40.387286    3926 client.go:168] LocalClient.Create starting
	I0814 09:45:40.387407    3926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:45:40.387461    3926 main.go:141] libmachine: Decoding PEM data...
	I0814 09:45:40.387475    3926 main.go:141] libmachine: Parsing certificate...
	I0814 09:45:40.387530    3926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:45:40.387570    3926 main.go:141] libmachine: Decoding PEM data...
	I0814 09:45:40.387583    3926 main.go:141] libmachine: Parsing certificate...
	I0814 09:45:40.388101    3926 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:45:40.589506    3926 main.go:141] libmachine: Creating SSH key...
	I0814 09:45:40.622759    3926 main.go:141] libmachine: Creating Disk image...
	I0814 09:45:40.622764    3926 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:45:40.622919    3926 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2
	I0814 09:45:40.632815    3926 main.go:141] libmachine: STDOUT: 
	I0814 09:45:40.632839    3926 main.go:141] libmachine: STDERR: 
	I0814 09:45:40.632908    3926 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2 +20000M
	I0814 09:45:40.641416    3926 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:45:40.641431    3926 main.go:141] libmachine: STDERR: 
	I0814 09:45:40.641452    3926 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2
	I0814 09:45:40.641457    3926 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:45:40.641475    3926 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:45:40.641505    3926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:9e:5c:3e:f7:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2
	I0814 09:45:40.643137    3926 main.go:141] libmachine: STDOUT: 
	I0814 09:45:40.643154    3926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:45:40.643169    3926 client.go:171] duration metric: took 255.859833ms to LocalClient.Create
	I0814 09:45:42.645240    3926 start.go:128] duration metric: took 2.3248805s to createHost
	I0814 09:45:42.645287    3926 start.go:83] releasing machines lock for "kubernetes-upgrade-652000", held for 2.3253435s
	W0814 09:45:42.645507    3926 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-652000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:45:42.654895    3926 out.go:177] 
	W0814 09:45:42.658059    3926 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:45:42.658205    3926 out.go:239] * 
	* 
	W0814 09:45:42.660771    3926 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:45:42.670930    3926 out.go:177] 

** /stderr **
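
Both VM creation attempts above fail at the same step: qemu-system-aarch64 is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). On a host in this state the first things to verify are the socket and the daemon; a sketch, assuming a Homebrew-managed socket_vmnet install as in the minikube qemu driver docs:

	ls -l /var/run/socket_vmnet                 # the unix socket should exist
	pgrep -fl socket_vmnet                      # the daemon should be running
	sudo brew services restart socket_vmnet     # assumes socket_vmnet was installed via Homebrew services
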
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-652000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-652000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-652000: (3.375586083s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-652000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-652000 status --format={{.Host}}: exit status 7 (59.409959ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-652000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-652000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.224664s)

-- stdout --
	* [kubernetes-upgrade-652000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-652000" primary control-plane node in "kubernetes-upgrade-652000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-652000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:45:46.149721    3976 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:45:46.150042    3976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:45:46.150074    3976 out.go:304] Setting ErrFile to fd 2...
	I0814 09:45:46.150078    3976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:45:46.150254    3976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:45:46.151675    3976 out.go:298] Setting JSON to false
	I0814 09:45:46.168642    3976 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2703,"bootTime":1723651243,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:45:46.168714    3976 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:45:46.184125    3976 out.go:177] * [kubernetes-upgrade-652000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:45:46.189093    3976 notify.go:220] Checking for updates...
	I0814 09:45:46.194132    3976 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:45:46.202070    3976 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:45:46.209068    3976 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:45:46.216097    3976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:45:46.224256    3976 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:45:46.232068    3976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:45:46.236202    3976 config.go:182] Loaded profile config "kubernetes-upgrade-652000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0814 09:45:46.236475    3976 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:45:46.241038    3976 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:45:46.249011    3976 start.go:297] selected driver: qemu2
	I0814 09:45:46.249017    3976 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:45:46.249065    3976 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:45:46.251567    3976 cni.go:84] Creating CNI manager for ""
	I0814 09:45:46.251584    3976 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:45:46.251611    3976 start.go:340] cluster config:
	{Name:kubernetes-upgrade-652000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:45:46.255305    3976 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:45:46.260074    3976 out.go:177] * Starting "kubernetes-upgrade-652000" primary control-plane node in "kubernetes-upgrade-652000" cluster
	I0814 09:45:46.268027    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:45:46.268042    3976 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:45:46.268052    3976 cache.go:56] Caching tarball of preloaded images
	I0814 09:45:46.268107    3976 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:45:46.268112    3976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:45:46.268166    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/kubernetes-upgrade-652000/config.json ...
	I0814 09:45:46.268467    3976 start.go:360] acquireMachinesLock for kubernetes-upgrade-652000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:45:46.268502    3976 start.go:364] duration metric: took 28.334µs to acquireMachinesLock for "kubernetes-upgrade-652000"
	I0814 09:45:46.268512    3976 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:45:46.268520    3976 fix.go:54] fixHost starting: 
	I0814 09:45:46.268638    3976 fix.go:112] recreateIfNeeded on kubernetes-upgrade-652000: state=Stopped err=<nil>
	W0814 09:45:46.268646    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:45:46.277052    3976 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-652000" ...
	I0814 09:45:46.280032    3976 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:45:46.280076    3976 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:9e:5c:3e:f7:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2
	I0814 09:45:46.282137    3976 main.go:141] libmachine: STDOUT: 
	I0814 09:45:46.282153    3976 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:45:46.282194    3976 fix.go:56] duration metric: took 13.676209ms for fixHost
	I0814 09:45:46.282199    3976 start.go:83] releasing machines lock for "kubernetes-upgrade-652000", held for 13.693541ms
	W0814 09:45:46.282205    3976 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:45:46.282254    3976 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:45:46.282259    3976 start.go:729] Will try again in 5 seconds ...
	I0814 09:45:51.284310    3976 start.go:360] acquireMachinesLock for kubernetes-upgrade-652000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:45:51.284763    3976 start.go:364] duration metric: took 316.625µs to acquireMachinesLock for "kubernetes-upgrade-652000"
	I0814 09:45:51.284917    3976 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:45:51.284939    3976 fix.go:54] fixHost starting: 
	I0814 09:45:51.285657    3976 fix.go:112] recreateIfNeeded on kubernetes-upgrade-652000: state=Stopped err=<nil>
	W0814 09:45:51.285687    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:45:51.295202    3976 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-652000" ...
	I0814 09:45:51.299157    3976 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:45:51.299452    3976 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:9e:5c:3e:f7:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubernetes-upgrade-652000/disk.qcow2
	I0814 09:45:51.309702    3976 main.go:141] libmachine: STDOUT: 
	I0814 09:45:51.309789    3976 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:45:51.309891    3976 fix.go:56] duration metric: took 24.953875ms for fixHost
	I0814 09:45:51.309919    3976 start.go:83] releasing machines lock for "kubernetes-upgrade-652000", held for 25.131375ms
	W0814 09:45:51.310204    3976 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-652000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-652000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:45:51.317145    3976 out.go:177] 
	W0814 09:45:51.321052    3976 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:45:51.321113    3976 out.go:239] * 
	* 
	W0814 09:45:51.323736    3976 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:45:51.333147    3976 out.go:177] 

** /stderr **
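The exit status 80 above traces to a single root cause: on this runner the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet (see the executed command line in the log), and nothing was listening on that socket, so the VM never came up. A quick host-side confirmation, independent of minikube, is to dial the socket directly. A minimal Go sketch (a hypothetical standalone helper, not part of the test suite):

// probe_socket_vmnet.go - dial the unix socket that socket_vmnet_client needs,
// to separate "daemon not running" from a genuine qemu failure.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing command line above

	conn, err := net.DialTimeout("unix", sock, time.Second)
	if err != nil {
		// "connection refused" here reproduces the driver-start failure
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails with "connection refused", restarting the socket_vmnet service (however it is managed on this host) is the actual fix; the suggested "minikube delete -p kubernetes-upgrade-652000" only clears the stale profile.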
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-652000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-652000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-652000 version --output=json: exit status 1 (60.4885ms)

** stderr ** 
	error: context "kubernetes-upgrade-652000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
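The kubectl failure is a downstream symptom rather than a second bug: because the VM never started, minikube never wrote a "kubernetes-upgrade-652000" context into the kubeconfig this run points at (KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig), so any "kubectl --context" invocation exits 1 before ever reaching a server. A stdlib-only Go sketch to confirm that; the substring match is a deliberate simplification, since kubeconfig is really YAML:

// kubecontext_check.go - report whether a profile's context survived in kubeconfig.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	path := os.Getenv("KUBECONFIG") // this run sets it under the minikube-integration tree
	if path == "" {
		home, _ := os.UserHomeDir()
		path = filepath.Join(home, ".kube", "config") // kubectl's default location
	}
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// crude but sufficient for a yes/no: context names appear verbatim in the file
	fmt.Println("context present:", strings.Contains(string(data), "kubernetes-upgrade-652000"))
}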
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-14 09:45:51.406233 -0700 PDT m=+2205.561672334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-652000 -n kubernetes-upgrade-652000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-652000 -n kubernetes-upgrade-652000: exit status 7 (34.512875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-652000" host is not running, skipping log retrieval (state="Stopped")
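The "(may be ok)" above reflects that "minikube status" encodes component state in its exit code instead of treating a stopped cluster as an error: going by its help text, the low three bits flag the host, the cluster, and Kubernetes respectively as not OK, so 7 means all three are down, which matches the "Stopped" stdout. A tiny Go sketch of that decoding (bit assignments assumed from "minikube status --help", worth re-checking against the installed version):

// decode_status.go - unpack the bitmask exit code of "minikube status".
package main

import "fmt"

func main() {
	code := 7 // the exit status observed above
	fmt.Println("host NOK:      ", code&1 != 0)
	fmt.Println("cluster NOK:   ", code&2 != 0)
	fmt.Println("kubernetes NOK:", code&4 != 0) // 1+2+4 = 7: everything down
}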
helpers_test.go:175: Cleaning up "kubernetes-upgrade-652000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-652000
--- FAIL: TestKubernetesUpgrade (18.76s)

TestStoppedBinaryUpgrade/Upgrade (587.68s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.53578961 start -p stopped-upgrade-996000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.53578961 start -p stopped-upgrade-996000 --memory=2200 --vm-driver=qemu2 : (50.023571417s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.53578961 -p stopped-upgrade-996000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.53578961 -p stopped-upgrade-996000 stop: (12.09415275s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-996000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-996000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m45.494289667s)

-- stdout --
	* [stopped-upgrade-996000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-996000" primary control-plane node in "stopped-upgrade-996000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-996000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0814 09:46:44.476804    4019 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:46:44.476930    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:46:44.476933    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:46:44.476936    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:46:44.477081    4019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:46:44.478133    4019 out.go:298] Setting JSON to false
	I0814 09:46:44.495152    4019 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2761,"bootTime":1723651243,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:46:44.495220    4019 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:46:44.499587    4019 out.go:177] * [stopped-upgrade-996000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:46:44.507762    4019 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:46:44.507803    4019 notify.go:220] Checking for updates...
	I0814 09:46:44.515718    4019 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:46:44.518730    4019 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:46:44.521733    4019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:46:44.524679    4019 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:46:44.527697    4019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:46:44.530954    4019 config.go:182] Loaded profile config "stopped-upgrade-996000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0814 09:46:44.534637    4019 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 09:46:44.537708    4019 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:46:44.541681    4019 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:46:44.548721    4019 start.go:297] selected driver: qemu2
	I0814 09:46:44.548726    4019 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-996000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50269 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0814 09:46:44.548774    4019 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:46:44.551109    4019 cni.go:84] Creating CNI manager for ""
	I0814 09:46:44.551125    4019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:46:44.551150    4019 start.go:340] cluster config:
	{Name:stopped-upgrade-996000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50269 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0814 09:46:44.551197    4019 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:46:44.559686    4019 out.go:177] * Starting "stopped-upgrade-996000" primary control-plane node in "stopped-upgrade-996000" cluster
	I0814 09:46:44.562652    4019 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0814 09:46:44.562667    4019 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0814 09:46:44.562675    4019 cache.go:56] Caching tarball of preloaded images
	I0814 09:46:44.562727    4019 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:46:44.562733    4019 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0814 09:46:44.562790    4019 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/config.json ...
	I0814 09:46:44.563120    4019 start.go:360] acquireMachinesLock for stopped-upgrade-996000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:46:44.563148    4019 start.go:364] duration metric: took 21.584µs to acquireMachinesLock for "stopped-upgrade-996000"
	I0814 09:46:44.563157    4019 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:46:44.563162    4019 fix.go:54] fixHost starting: 
	I0814 09:46:44.563267    4019 fix.go:112] recreateIfNeeded on stopped-upgrade-996000: state=Stopped err=<nil>
	W0814 09:46:44.563274    4019 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:46:44.570580    4019 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-996000" ...
	I0814 09:46:44.574670    4019 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:46:44.574732    4019 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50234-:22,hostfwd=tcp::50235-:2376,hostname=stopped-upgrade-996000 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/disk.qcow2
	I0814 09:46:44.611529    4019 main.go:141] libmachine: STDOUT: 
	I0814 09:46:44.611551    4019 main.go:141] libmachine: STDERR: 
	I0814 09:46:44.611557    4019 main.go:141] libmachine: Waiting for VM to start (ssh -p 50234 docker@127.0.0.1)...
	I0814 09:47:04.658948    4019 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/config.json ...
	I0814 09:47:04.659165    4019 machine.go:94] provisionDockerMachine start ...
	I0814 09:47:04.659208    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:04.659349    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:04.659354    4019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 09:47:04.722965    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 09:47:04.722980    4019 buildroot.go:166] provisioning hostname "stopped-upgrade-996000"
	I0814 09:47:04.723027    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:04.723137    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:04.723143    4019 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-996000 && echo "stopped-upgrade-996000" | sudo tee /etc/hostname
	I0814 09:47:04.790976    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-996000
	
	I0814 09:47:04.791033    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:04.791161    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:04.791170    4019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-996000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-996000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-996000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 09:47:04.857575    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 09:47:04.857587    4019 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19446-1067/.minikube CaCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19446-1067/.minikube}
	I0814 09:47:04.857599    4019 buildroot.go:174] setting up certificates
	I0814 09:47:04.857604    4019 provision.go:84] configureAuth start
	I0814 09:47:04.857608    4019 provision.go:143] copyHostCerts
	I0814 09:47:04.857695    4019 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem, removing ...
	I0814 09:47:04.857702    4019 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem
	I0814 09:47:04.857796    4019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/cert.pem (1123 bytes)
	I0814 09:47:04.857973    4019 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem, removing ...
	I0814 09:47:04.857978    4019 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem
	I0814 09:47:04.858022    4019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/key.pem (1675 bytes)
	I0814 09:47:04.858119    4019 exec_runner.go:144] found /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem, removing ...
	I0814 09:47:04.858123    4019 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem
	I0814 09:47:04.858160    4019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.pem (1082 bytes)
	I0814 09:47:04.858250    4019 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-996000 san=[127.0.0.1 localhost minikube stopped-upgrade-996000]
	I0814 09:47:04.928399    4019 provision.go:177] copyRemoteCerts
	I0814 09:47:04.928435    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 09:47:04.928444    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	I0814 09:47:04.962728    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 09:47:04.969989    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0814 09:47:04.977192    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 09:47:04.984072    4019 provision.go:87] duration metric: took 126.473791ms to configureAuth
	I0814 09:47:04.984083    4019 buildroot.go:189] setting minikube options for container-runtime
	I0814 09:47:04.984185    4019 config.go:182] Loaded profile config "stopped-upgrade-996000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0814 09:47:04.984228    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:04.984313    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:04.984317    4019 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0814 09:47:05.051946    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0814 09:47:05.051960    4019 buildroot.go:70] root file system type: tmpfs
	I0814 09:47:05.052028    4019 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0814 09:47:05.052094    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.052225    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:05.052258    4019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0814 09:47:05.119257    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0814 09:47:05.119305    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.119420    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:05.119429    4019 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0814 09:47:05.450423    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0814 09:47:05.450435    4019 machine.go:97] duration metric: took 791.351333ms to provisionDockerMachine
	I0814 09:47:05.450441    4019 start.go:293] postStartSetup for "stopped-upgrade-996000" (driver="qemu2")
	I0814 09:47:05.450448    4019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 09:47:05.450512    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 09:47:05.450521    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	I0814 09:47:05.485426    4019 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 09:47:05.486827    4019 info.go:137] Remote host: Buildroot 2021.02.12
	I0814 09:47:05.486837    4019 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19446-1067/.minikube/addons for local assets ...
	I0814 09:47:05.486933    4019 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19446-1067/.minikube/files for local assets ...
	I0814 09:47:05.487023    4019 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem -> 16002.pem in /etc/ssl/certs
	I0814 09:47:05.487120    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 09:47:05.490187    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem --> /etc/ssl/certs/16002.pem (1708 bytes)
	I0814 09:47:05.497498    4019 start.go:296] duration metric: took 47.056541ms for postStartSetup
	I0814 09:47:05.497511    4019 fix.go:56] duration metric: took 20.938155209s for fixHost
	I0814 09:47:05.497546    4019 main.go:141] libmachine: Using SSH client type: native
	I0814 09:47:05.497660    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013505a0] 0x101352e00 <nil>  [] 0s} localhost 50234 <nil> <nil>}
	I0814 09:47:05.497664    4019 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 09:47:05.562873    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723654025.168279587
	
	I0814 09:47:05.562881    4019 fix.go:216] guest clock: 1723654025.168279587
	I0814 09:47:05.562886    4019 fix.go:229] Guest: 2024-08-14 09:47:05.168279587 -0700 PDT Remote: 2024-08-14 09:47:05.497513 -0700 PDT m=+21.044678292 (delta=-329.233413ms)
	I0814 09:47:05.562903    4019 fix.go:200] guest clock delta is within tolerance: -329.233413ms
	I0814 09:47:05.562905    4019 start.go:83] releasing machines lock for "stopped-upgrade-996000", held for 21.003565916s
	I0814 09:47:05.562976    4019 ssh_runner.go:195] Run: cat /version.json
	I0814 09:47:05.562984    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	I0814 09:47:05.562990    4019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 09:47:05.563006    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	W0814 09:47:05.563696    4019 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50423->127.0.0.1:50234: write: broken pipe
	I0814 09:47:05.563715    4019 retry.go:31] will retry after 136.235677ms: ssh: handshake failed: write tcp 127.0.0.1:50423->127.0.0.1:50234: write: broken pipe
	W0814 09:47:05.595412    4019 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0814 09:47:05.595474    4019 ssh_runner.go:195] Run: systemctl --version
	I0814 09:47:05.597153    4019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 09:47:05.598904    4019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 09:47:05.598935    4019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0814 09:47:05.601701    4019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0814 09:47:05.605976    4019 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 09:47:05.605986    4019 start.go:495] detecting cgroup driver to use...
	I0814 09:47:05.606054    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:47:05.612907    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0814 09:47:05.615891    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0814 09:47:05.618628    4019 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0814 09:47:05.618654    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0814 09:47:05.622274    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0814 09:47:05.625251    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0814 09:47:05.628434    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0814 09:47:05.631311    4019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 09:47:05.634574    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0814 09:47:05.638248    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0814 09:47:05.642003    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0814 09:47:05.645870    4019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 09:47:05.648998    4019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 09:47:05.651550    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:05.734158    4019 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0814 09:47:05.741556    4019 start.go:495] detecting cgroup driver to use...
	I0814 09:47:05.741636    4019 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0814 09:47:05.748228    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 09:47:05.754661    4019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 09:47:05.765392    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 09:47:05.808142    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0814 09:47:05.812845    4019 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0814 09:47:05.853960    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0814 09:47:05.859506    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 09:47:05.865377    4019 ssh_runner.go:195] Run: which cri-dockerd
	I0814 09:47:05.866914    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0814 09:47:05.870001    4019 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0814 09:47:05.875699    4019 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0814 09:47:05.935593    4019 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0814 09:47:06.005102    4019 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0814 09:47:06.005177    4019 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0814 09:47:06.011039    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:06.098560    4019 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0814 09:47:07.225386    4019 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.126918041s)
	I0814 09:47:07.225516    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0814 09:47:07.230841    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0814 09:47:07.236448    4019 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0814 09:47:07.318098    4019 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0814 09:47:07.403116    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:07.487797    4019 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0814 09:47:07.494838    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0814 09:47:07.500099    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:07.573732    4019 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0814 09:47:07.617793    4019 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0814 09:47:07.617951    4019 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0814 09:47:07.620661    4019 start.go:563] Will wait 60s for crictl version
	I0814 09:47:07.620705    4019 ssh_runner.go:195] Run: which crictl
	I0814 09:47:07.622343    4019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 09:47:07.637237    4019 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0814 09:47:07.637297    4019 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0814 09:47:07.653421    4019 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0814 09:47:07.675913    4019 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0814 09:47:07.675978    4019 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0814 09:47:07.677322    4019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 09:47:07.681334    4019 kubeadm.go:883] updating cluster {Name:stopped-upgrade-996000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50269 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0814 09:47:07.681390    4019 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0814 09:47:07.681433    4019 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0814 09:47:07.692379    4019 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0814 09:47:07.692388    4019 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0814 09:47:07.692435    4019 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0814 09:47:07.695526    4019 ssh_runner.go:195] Run: which lz4
	I0814 09:47:07.696760    4019 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 09:47:07.697990    4019 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 09:47:07.698002    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0814 09:47:08.644019    4019 docker.go:649] duration metric: took 947.378666ms to copy over tarball
	I0814 09:47:08.644083    4019 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 09:47:09.800990    4019 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.157003291s)
	I0814 09:47:09.801003    4019 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 09:47:09.816708    4019 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0814 09:47:09.819691    4019 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0814 09:47:09.824798    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:09.903171    4019 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0814 09:47:11.428095    4019 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.525043458s)
	I0814 09:47:11.428185    4019 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0814 09:47:11.444011    4019 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0814 09:47:11.444020    4019 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0814 09:47:11.444025    4019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 09:47:11.448065    4019 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:11.449920    4019 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:11.452136    4019 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:11.452219    4019 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:11.454772    4019 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:11.454799    4019 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0814 09:47:11.456497    4019 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:11.456627    4019 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:11.458512    4019 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0814 09:47:11.459321    4019 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:11.459866    4019 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:11.459897    4019 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:11.463664    4019 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:11.463672    4019 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:11.463676    4019 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:11.465743    4019 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:11.917773    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:11.929413    4019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0814 09:47:11.929442    4019 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:11.929516    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0814 09:47:11.936769    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:11.943888    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0814 09:47:11.951749    4019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0814 09:47:11.951770    4019 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:11.951889    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0814 09:47:11.964195    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0814 09:47:11.964544    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0814 09:47:11.976593    4019 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0814 09:47:11.976618    4019 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0814 09:47:11.976676    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0814 09:47:11.981071    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:11.990611    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0814 09:47:11.990735    4019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0814 09:47:11.993932    4019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0814 09:47:11.993956    4019 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:11.993963    4019 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0814 09:47:11.993998    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0814 09:47:11.994016    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0814 09:47:12.002104    4019 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0814 09:47:12.002127    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0814 09:47:12.003769    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0814 09:47:12.013676    4019 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0814 09:47:12.013798    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:12.019790    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0814 09:47:12.049134    4019 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0814 09:47:12.049186    4019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0814 09:47:12.049201    4019 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:12.049272    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0814 09:47:12.049305    4019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0814 09:47:12.049312    4019 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:12.049332    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0814 09:47:12.058516    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:12.074115    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0814 09:47:12.074288    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0814 09:47:12.074389    4019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0814 09:47:12.082150    4019 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0814 09:47:12.082148    4019 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0814 09:47:12.082183    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0814 09:47:12.082186    4019 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0814 09:47:12.082238    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0814 09:47:12.091303    4019 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0814 09:47:12.091418    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:12.104265    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0814 09:47:12.104407    4019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0814 09:47:12.126053    4019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0814 09:47:12.126082    4019 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:12.126081    4019 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0814 09:47:12.126109    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0814 09:47:12.126135    4019 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:47:12.161734    4019 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0814 09:47:12.161750    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0814 09:47:12.187022    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 09:47:12.187158    4019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0814 09:47:12.264746    4019 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0814 09:47:12.264799    4019 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0814 09:47:12.264825    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0814 09:47:12.337275    4019 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 09:47:12.337302    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0814 09:47:12.722425    4019 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 09:47:12.722492    4019 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0814 09:47:12.722525    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0814 09:47:12.875862    4019 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0814 09:47:12.875909    4019 cache_images.go:92] duration metric: took 1.432000333s to LoadCachedImages
	W0814 09:47:12.875958    4019 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
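
The sequence above shows the cached-image flow: probe each tarball on the guest with `stat`, `scp` anything missing from the host cache, then pipe it into `docker load`; the final warning fires because kube-apiserver_v1.24.1 was never present in the host cache. Below is a minimal local sketch of the same check-then-load pattern — running against a local Docker daemon rather than over SSH, with a path taken from the log purely as an example:

```go
// Illustrative sketch only (not minikube's code): skip loading if the image
// tarball is absent, otherwise stream it into `docker load`, mirroring the
// `stat` probe and `sudo cat <tarball> | docker load` commands in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadIfCached(tarball string) error {
	// Existence check, analogous to the `stat -c "%s %y"` probe above.
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("image tarball not cached: %w", err)
	}
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	// Equivalent of piping the tarball into `docker load`.
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadIfCached("/var/lib/minikube/images/coredns_v1.8.6"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
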
	I0814 09:47:12.875963    4019 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0814 09:47:12.876024    4019 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-996000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 09:47:12.876125    4019 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0814 09:47:12.890457    4019 cni.go:84] Creating CNI manager for ""
	I0814 09:47:12.890473    4019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:47:12.890478    4019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 09:47:12.890486    4019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-996000 NodeName:stopped-upgrade-996000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 09:47:12.890545    4019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-996000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 09:47:12.890607    4019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0814 09:47:12.893926    4019 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 09:47:12.893983    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 09:47:12.897648    4019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0814 09:47:12.903435    4019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 09:47:12.909254    4019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
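
The kubelet unit drop-in and the kubeadm.yaml above are rendered in memory from the options struct logged at kubeadm.go:181 and then shipped with the `scp memory -->` writes. A hedged sketch of rendering such a config with text/template follows; the struct and its field names are simplified assumptions for illustration, not minikube's actual types:

```go
// Sketch: render a kubeadm InitConfiguration/ClusterConfiguration fragment
// from an options struct, the way the log's "kubeadm options" struct becomes
// the YAML above. Values are taken from the log; the template is abridged.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress  string
	BindPort          int
	CRISocket         string
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "10.0.2.15",
		BindPort:          8443,
		CRISocket:         "unix:///var/run/cri-dockerd.sock",
		NodeName:          "stopped-upgrade-996000",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.24.1",
	}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}
```
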
	I0814 09:47:12.915225    4019 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0814 09:47:12.916637    4019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
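
The one-liner above makes the hosts entry idempotent: filter out any existing `control-plane.minikube.internal` line, append the fresh mapping, and copy the result back with sudo. A minimal Go sketch of the same filter-and-append step (the temp-file-plus-`sudo cp` dance is omitted; this writes a plain file path directly):

```go
// Sketch of the idempotent /etc/hosts rewrite: drop stale lines for the
// host, then append "<ip>\t<host>". Path is a placeholder for illustration.
package main

import (
	"os"
	"strings"
)

func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same predicate as the log's `grep -v $'\t<host>$'`.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = upsertHost("/tmp/hosts", "10.0.2.15", "control-plane.minikube.internal")
}
```
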
	I0814 09:47:12.921115    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:47:12.989293    4019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 09:47:12.995784    4019 certs.go:68] Setting up /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000 for IP: 10.0.2.15
	I0814 09:47:12.995796    4019 certs.go:194] generating shared ca certs ...
	I0814 09:47:12.995805    4019 certs.go:226] acquiring lock for ca certs: {Name:mk41737d7568a132ec38012a87fa9d3345f331c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:12.995985    4019 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.key
	I0814 09:47:12.996035    4019 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.key
	I0814 09:47:12.996041    4019 certs.go:256] generating profile certs ...
	I0814 09:47:12.996113    4019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/client.key
	I0814 09:47:12.996131    4019 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key.1b5cac53
	I0814 09:47:12.996144    4019 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt.1b5cac53 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0814 09:47:13.283174    4019 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt.1b5cac53 ...
	I0814 09:47:13.283190    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt.1b5cac53: {Name:mk34f4324d8adb08a602260706cc47dfde65af01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:13.283504    4019 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key.1b5cac53 ...
	I0814 09:47:13.283510    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key.1b5cac53: {Name:mk3dc04017450c5ab8112180685b26cf9d4c5148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:13.283650    4019 certs.go:381] copying /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt.1b5cac53 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt
	I0814 09:47:13.283801    4019 certs.go:385] copying /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key.1b5cac53 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key
	I0814 09:47:13.283969    4019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/proxy-client.key
	I0814 09:47:13.284101    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600.pem (1338 bytes)
	W0814 09:47:13.284133    4019 certs.go:480] ignoring /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600_empty.pem, impossibly tiny 0 bytes
	I0814 09:47:13.284139    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca-key.pem (1675 bytes)
	I0814 09:47:13.284165    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem (1082 bytes)
	I0814 09:47:13.284192    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem (1123 bytes)
	I0814 09:47:13.284225    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/key.pem (1675 bytes)
	I0814 09:47:13.284277    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem (1708 bytes)
	I0814 09:47:13.284674    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 09:47:13.293162    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 09:47:13.301683    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 09:47:13.313966    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0814 09:47:13.322138    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 09:47:13.330365    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 09:47:13.338617    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 09:47:13.346628    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 09:47:13.355067    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/ssl/certs/16002.pem --> /usr/share/ca-certificates/16002.pem (1708 bytes)
	I0814 09:47:13.362761    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 09:47:13.370424    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/1600.pem --> /usr/share/ca-certificates/1600.pem (1338 bytes)
	I0814 09:47:13.378908    4019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 09:47:13.384822    4019 ssh_runner.go:195] Run: openssl version
	I0814 09:47:13.387292    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16002.pem && ln -fs /usr/share/ca-certificates/16002.pem /etc/ssl/certs/16002.pem"
	I0814 09:47:13.390733    4019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16002.pem
	I0814 09:47:13.392429    4019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:16 /usr/share/ca-certificates/16002.pem
	I0814 09:47:13.392469    4019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16002.pem
	I0814 09:47:13.394559    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16002.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 09:47:13.398220    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 09:47:13.402263    4019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:13.406706    4019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:13.406849    4019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 09:47:13.409980    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 09:47:13.414096    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1600.pem && ln -fs /usr/share/ca-certificates/1600.pem /etc/ssl/certs/1600.pem"
	I0814 09:47:13.417934    4019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1600.pem
	I0814 09:47:13.419806    4019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:16 /usr/share/ca-certificates/1600.pem
	I0814 09:47:13.419849    4019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1600.pem
	I0814 09:47:13.421859    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1600.pem /etc/ssl/certs/51391683.0"
	I0814 09:47:13.425665    4019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 09:47:13.427523    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 09:47:13.429793    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 09:47:13.432297    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 09:47:13.434584    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 09:47:13.436781    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 09:47:13.439160    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
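
Each `openssl x509 -noout -checkend 86400` probe above exits non-zero if the certificate will expire within 86400 seconds (24h), which is what triggers regeneration. A rough Go equivalent using crypto/x509 — the path is one of the files from the log, reused here only as an example:

```go
// Rough equivalent of `openssl x509 -noout -checkend 86400`: parse a PEM
// certificate and report whether it expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + d" passes the certificate's NotAfter bound.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	}
}
```
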
	I0814 09:47:13.441402    4019 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-996000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50269 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0814 09:47:13.441492    4019 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0814 09:47:13.453961    4019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 09:47:13.457759    4019 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 09:47:13.457767    4019 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 09:47:13.457810    4019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 09:47:13.462155    4019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 09:47:13.462426    4019 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-996000" does not appear in /Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:47:13.462477    4019 kubeconfig.go:62] /Users/jenkins/minikube-integration/19446-1067/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-996000" cluster setting kubeconfig missing "stopped-upgrade-996000" context setting]
	I0814 09:47:13.462609    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/kubeconfig: {Name:mkd5271b15535f495ab8e34d870e7dbcadc9c40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:47:13.463022    4019 kapi.go:59] client config for stopped-upgrade-996000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/client.key", CAFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102907e30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 09:47:13.463361    4019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 09:47:13.466477    4019 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-996000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
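
The drift check logged at kubeadm.go:640 above works by running `sudo diff -u` between the on-disk kubeadm.yaml and the freshly rendered kubeadm.yaml.new: exit status 1 with output means the config changed and the cluster must be reconfigured. A small sketch of that decision, using diff's documented exit codes (0 = identical, 1 = differ, 2 = error):

```go
// Sketch: detect kubeadm config drift by shelling out to `diff -u`,
// interpreting exit status 1 as "files differ". Paths are from the log.
package main

import (
	"fmt"
	"os/exec"
)

func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit 2 or exec failure
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}
```
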
	I0814 09:47:13.466484    4019 kubeadm.go:1160] stopping kube-system containers ...
	I0814 09:47:13.466535    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0814 09:47:13.479471    4019 docker.go:483] Stopping containers: [af86f8f14004 1c40d2ec1695 e325fbc948d1 9575eb1a63d7 7e949e3a70a3 9a7859c188cb eee5979245b1 a41ec406c2ba]
	I0814 09:47:13.479540    4019 ssh_runner.go:195] Run: docker stop af86f8f14004 1c40d2ec1695 e325fbc948d1 9575eb1a63d7 7e949e3a70a3 9a7859c188cb eee5979245b1 a41ec406c2ba
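
Stopping the kube-system containers above is a two-step pattern: enumerate IDs with a `docker ps -a` name filter, then pass them all to a single `docker stop`. A self-contained sketch of the same pattern, using the filter string from the log:

```go
// Illustrative only: list kube-system pod containers with the same name
// filter the log uses, then stop them in one `docker stop` invocation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		fmt.Println("docker stop failed:", err)
	}
}
```
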
	I0814 09:47:13.490780    4019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 09:47:13.496975    4019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:47:13.500730    4019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:47:13.500738    4019 kubeadm.go:157] found existing configuration files:
	
	I0814 09:47:13.500778    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf
	I0814 09:47:13.503940    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 09:47:13.503999    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 09:47:13.508194    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf
	I0814 09:47:13.513911    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 09:47:13.513978    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 09:47:13.517470    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf
	I0814 09:47:13.520788    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 09:47:13.520830    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:47:13.523679    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf
	I0814 09:47:13.526359    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 09:47:13.526396    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 09:47:13.529614    4019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:47:13.532961    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:13.558040    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:13.881507    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:14.003409    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:14.035322    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 09:47:14.058412    4019 api_server.go:52] waiting for apiserver process to appear ...
	I0814 09:47:14.058477    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:14.560517    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:15.058637    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:47:15.063884    4019 api_server.go:72] duration metric: took 1.005552958s to wait for apiserver process to appear ...
	I0814 09:47:15.063895    4019 api_server.go:88] waiting for apiserver healthz status ...
	I0814 09:47:15.063907    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:20.065617    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:20.065640    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:25.065541    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:25.065576    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:30.065578    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:30.065630    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:35.066007    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:35.066055    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:40.066567    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:40.066645    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:45.068050    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:45.068123    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:50.069584    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:50.069675    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:47:55.071192    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:47:55.071277    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:00.072360    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:00.072409    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:05.074541    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:05.074595    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:10.076279    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:10.076365    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:15.077490    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
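
The ~5s rhythm of the "Checking apiserver healthz" / "stopped" pairs above comes from a per-request client timeout: each GET to /healthz is given a few seconds, and after enough consecutive failures minikube falls back to gathering component logs (the section that follows). A minimal sketch of such a polling loop — the endpoint and intervals come from the log, but the InsecureSkipVerify shortcut is an assumption; the real client authenticates with the profile's client certificate:

```go
// Sketch: poll an apiserver /healthz endpoint with a short per-request
// timeout until it returns 200 OK or an overall deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second, // matches the ~5s gaps between probes
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
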
	I0814 09:48:15.077656    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:15.090882    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:15.090957    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:15.102370    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:15.102440    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:15.113408    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:15.113471    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:15.123832    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:15.123903    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:15.134719    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:15.134783    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:15.145622    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:15.145684    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:15.155639    4019 logs.go:276] 0 containers: []
	W0814 09:48:15.155652    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:15.155709    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:15.167116    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:15.167139    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:15.167144    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:15.206654    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:15.206689    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:15.222078    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:15.222094    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:15.233164    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:15.233175    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:15.258673    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:15.258681    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:15.263078    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:15.263087    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:15.274765    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:15.274776    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:15.292122    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:15.292135    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:15.307212    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:15.307226    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:15.390038    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:15.390051    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:15.402137    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:15.402147    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:15.418769    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:15.418779    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:15.434780    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:15.434789    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:15.450175    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:15.450188    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:15.464173    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:15.464182    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:15.491160    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:15.491169    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:15.504717    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:15.504732    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:18.017697    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:23.019887    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:23.020243    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:23.055829    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:23.055968    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:23.076652    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:23.076749    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:23.090792    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:23.090873    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:23.103336    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:23.103407    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:23.114812    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:23.114876    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:23.125843    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:23.125920    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:23.136759    4019 logs.go:276] 0 containers: []
	W0814 09:48:23.136772    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:23.136835    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:23.148984    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:23.149001    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:23.149006    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:23.167121    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:23.167131    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:23.194279    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:23.194287    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:23.198543    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:23.198550    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:23.213829    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:23.213839    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:23.229053    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:23.229064    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:23.240994    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:23.241009    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:23.252319    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:23.252330    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:23.265970    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:23.265982    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:23.280872    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:23.280885    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:23.292833    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:23.292844    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:23.306713    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:23.306724    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:23.331343    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:23.331354    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:23.345453    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:23.345467    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:23.357724    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:23.357735    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:23.393988    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:23.393996    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:23.433984    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:23.433998    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:25.947552    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:30.950445    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:30.950864    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:30.990794    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:30.990944    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:31.009846    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:31.009943    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:31.024449    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:31.024526    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:31.036087    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:31.036159    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:31.046524    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:31.046601    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:31.057079    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:31.057146    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:31.067325    4019 logs.go:276] 0 containers: []
	W0814 09:48:31.067335    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:31.067388    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:31.077620    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:31.077636    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:31.077643    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:31.114591    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:31.114602    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:31.149659    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:31.149671    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:31.171431    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:31.171442    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:31.194914    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:31.194924    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:31.210087    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:31.210097    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:31.222113    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:31.222124    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:31.233784    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:31.233797    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:31.248110    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:31.248121    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:31.259918    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:31.259927    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:31.278001    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:31.278012    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:31.291978    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:31.291988    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:31.303129    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:31.303139    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:31.327261    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:31.327272    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:31.331517    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:31.331526    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:31.358052    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:31.358063    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:31.369581    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:31.369593    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:33.884048    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:38.886319    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:38.886516    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:38.905506    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:38.905601    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:38.919367    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:38.919454    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:38.931198    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:38.931266    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:38.941806    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:38.941880    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:38.952672    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:38.952744    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:38.963815    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:38.963885    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:38.979802    4019 logs.go:276] 0 containers: []
	W0814 09:48:38.979812    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:38.979865    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:38.990234    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:38.990255    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:38.990260    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:39.004234    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:39.004246    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:39.016191    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:39.016205    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:39.030088    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:39.030097    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:39.042310    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:39.042328    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:39.057029    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:39.057038    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:39.074208    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:39.074218    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:39.113369    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:39.113388    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:39.117985    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:39.117999    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:39.133556    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:39.133571    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:39.146060    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:39.146071    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:39.171743    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:39.171753    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:39.185376    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:39.185386    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:39.208586    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:39.208597    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:39.232768    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:39.232778    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:39.267458    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:39.267472    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:39.282010    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:39.282022    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:41.796044    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:46.798221    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:46.798472    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:46.821604    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:46.821708    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:46.837405    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:46.837494    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:46.850000    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:46.850071    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:46.861194    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:46.861270    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:46.871487    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:46.871556    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:46.882042    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:46.882105    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:46.892252    4019 logs.go:276] 0 containers: []
	W0814 09:48:46.892262    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:46.892314    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:46.902773    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:46.902792    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:46.902797    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:46.921881    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:46.921892    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:46.937959    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:46.937969    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:46.955093    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:46.955103    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:46.969427    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:46.969442    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:46.980807    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:46.980818    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:46.992811    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:46.992821    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:47.031837    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:47.031846    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:47.059228    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:47.059239    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:47.070189    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:47.070201    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:47.081457    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:47.081468    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:47.085777    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:47.085786    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:47.121778    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:47.121790    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:47.138094    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:47.138105    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:47.155159    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:47.155174    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:47.179459    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:47.179473    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:47.193870    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:47.193879    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:49.711892    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:48:54.713903    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:48:54.714076    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:48:54.729761    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:48:54.729833    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:48:54.740288    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:48:54.740352    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:48:54.750349    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:48:54.750421    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:48:54.761345    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:48:54.761427    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:48:54.772398    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:48:54.772467    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:48:54.788190    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:48:54.788256    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:48:54.797655    4019 logs.go:276] 0 containers: []
	W0814 09:48:54.797665    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:48:54.797719    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:48:54.816634    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:48:54.816650    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:48:54.816656    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:48:54.828280    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:48:54.828292    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:48:54.847793    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:48:54.847803    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:48:54.862339    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:48:54.862349    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:48:54.874340    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:48:54.874350    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:48:54.885291    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:48:54.885302    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:48:54.889973    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:48:54.889979    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:48:54.914012    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:48:54.914022    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:48:54.927636    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:48:54.927645    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:48:54.941648    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:48:54.941660    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:48:54.952761    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:48:54.952774    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:48:54.978307    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:48:54.978315    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:48:55.015870    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:48:55.015876    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:48:55.050192    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:48:55.050203    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:48:55.065084    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:48:55.065094    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:48:55.076987    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:48:55.077000    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:48:55.096636    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:48:55.096646    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:48:57.610654    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:02.613129    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:02.613484    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:02.647146    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:02.647282    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:02.666430    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:02.666526    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:02.680934    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:02.681019    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:02.694850    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:02.694926    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:02.705774    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:02.705842    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:02.720140    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:02.720213    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:02.730291    4019 logs.go:276] 0 containers: []
	W0814 09:49:02.730303    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:02.730360    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:02.740775    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:02.740791    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:02.740797    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:02.784831    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:02.784846    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:02.799825    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:02.799837    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:02.824665    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:02.824677    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:02.835536    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:02.835548    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:02.847065    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:02.847076    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:02.859124    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:02.859139    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:02.897523    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:02.897533    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:02.901689    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:02.901697    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:02.924650    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:02.924658    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:02.935740    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:02.935753    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:02.949902    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:02.949915    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:02.961910    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:02.961921    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:02.978496    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:02.978508    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:02.990177    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:02.990191    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:03.007810    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:03.007822    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:03.026819    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:03.026829    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:05.541824    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:10.544108    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:10.544450    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:10.577910    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:10.578042    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:10.596152    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:10.596246    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:10.612241    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:10.612313    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:10.623033    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:10.623098    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:10.633317    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:10.633376    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:10.644113    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:10.644187    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:10.659537    4019 logs.go:276] 0 containers: []
	W0814 09:49:10.659548    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:10.659601    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:10.670633    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:10.670653    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:10.670658    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:10.684653    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:10.684665    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:10.721620    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:10.721629    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:10.746667    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:10.746678    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:10.757973    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:10.757984    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:10.772743    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:10.772753    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:10.786694    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:10.786704    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:10.799168    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:10.799181    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:10.810736    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:10.810747    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:10.846259    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:10.846270    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:10.872964    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:10.872975    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:10.885100    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:10.885118    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:10.889529    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:10.889536    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:10.903686    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:10.903706    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:10.918427    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:10.918436    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:10.935164    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:10.935174    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:10.947168    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:10.947178    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:13.466539    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:18.468699    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:18.468961    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:18.494369    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:18.494477    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:18.512483    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:18.512566    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:18.527156    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:18.527232    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:18.538158    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:18.538250    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:18.551740    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:18.551809    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:18.567688    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:18.567752    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:18.577825    4019 logs.go:276] 0 containers: []
	W0814 09:49:18.577836    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:18.577893    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:18.588410    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:18.588427    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:18.588432    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:18.602004    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:18.602018    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:18.613563    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:18.613574    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:18.637045    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:18.637052    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:18.674857    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:18.674865    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:18.702346    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:18.702357    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:18.717224    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:18.717233    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:18.735242    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:18.735251    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:18.747012    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:18.747023    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:18.758819    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:18.758830    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:18.762762    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:18.762771    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:18.797268    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:18.797282    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:18.818358    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:18.818368    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:18.844576    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:18.844591    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:18.864915    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:18.864926    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:18.882573    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:18.882584    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:18.898474    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:18.898488    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:21.412135    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:26.414130    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:26.414327    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:26.432327    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:26.432417    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:26.448522    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:26.448598    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:26.460361    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:26.460441    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:26.471102    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:26.471167    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:26.481598    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:26.481658    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:26.493414    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:26.493512    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:26.503784    4019 logs.go:276] 0 containers: []
	W0814 09:49:26.503796    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:26.503853    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:26.514270    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:26.514288    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:26.514293    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:26.518488    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:26.518495    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:26.532528    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:26.532538    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:26.543982    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:26.543992    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:26.556610    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:26.556622    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:26.569445    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:26.569456    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:26.603654    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:26.603665    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:26.621177    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:26.621189    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:26.636296    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:26.636307    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:26.649591    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:26.649602    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:26.674317    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:26.674328    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:26.701195    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:26.701207    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:26.715682    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:26.715692    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:26.727327    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:26.727340    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:26.741354    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:26.741365    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:26.780218    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:26.780229    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:26.800135    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:26.800145    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:29.320205    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:34.322199    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:34.322375    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:34.340332    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:34.340433    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:34.353939    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:34.354013    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:34.365122    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:34.365180    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:34.375574    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:34.375650    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:34.387727    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:34.387792    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:34.398919    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:34.398986    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:34.409493    4019 logs.go:276] 0 containers: []
	W0814 09:49:34.409505    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:34.409561    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:34.419767    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:34.419783    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:34.419788    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:34.433110    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:34.433120    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:34.444974    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:34.444984    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:34.459571    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:34.459579    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:34.471099    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:34.471107    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:34.495038    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:34.495049    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:34.509438    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:34.509451    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:34.534727    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:34.534740    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:34.545828    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:34.545837    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:34.557438    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:34.557449    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:34.575315    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:34.575330    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:34.613180    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:34.613187    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:34.646528    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:34.646542    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:34.666018    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:34.666031    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:34.677611    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:34.677621    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:34.682985    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:34.682997    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:34.696984    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:34.696993    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:37.212365    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:42.214178    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:42.214280    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:42.225907    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:42.225991    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:42.238517    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:42.238582    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:42.248823    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:42.248888    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:42.259844    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:42.259915    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:42.269984    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:42.270052    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:42.280693    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:42.280761    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:42.291322    4019 logs.go:276] 0 containers: []
	W0814 09:49:42.291333    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:42.291389    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:42.301486    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:42.301504    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:42.301510    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:42.316086    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:42.316100    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:42.327454    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:42.327468    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:42.343044    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:42.343054    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:42.361001    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:42.361011    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:42.372427    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:42.372438    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:42.387756    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:42.387769    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:42.422013    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:42.422024    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:42.437390    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:42.437403    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:42.449667    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:42.449681    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:42.453916    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:42.453924    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:42.478831    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:42.478841    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:42.491174    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:42.491188    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:42.504783    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:42.504795    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:42.516521    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:42.516535    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:42.555655    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:42.555664    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:42.579955    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:42.579963    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:45.093367    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:50.095576    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:50.095974    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:50.149971    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:50.150089    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:50.172255    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:50.172337    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:50.186744    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:50.186818    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:50.197532    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:50.197607    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:50.208293    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:50.208367    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:50.219654    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:50.219715    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:50.234306    4019 logs.go:276] 0 containers: []
	W0814 09:49:50.234318    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:50.234384    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:50.245123    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:50.245142    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:50.245149    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:50.268657    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:50.268665    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:50.302013    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:50.302024    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:50.316988    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:50.317001    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:50.329003    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:50.329013    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:49:50.340684    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:50.340696    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:50.356636    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:50.356646    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:50.361248    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:50.361255    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:50.375985    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:50.375997    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:50.390650    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:50.390664    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:50.405736    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:50.405746    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:50.430557    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:50.430568    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:50.442496    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:50.442508    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:50.454425    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:50.454436    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:50.465889    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:50.465898    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:50.501956    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:50.501971    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:50.519605    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:50.519618    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:53.033483    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:49:58.035609    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:49:58.036027    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:49:58.073365    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:49:58.073498    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:49:58.094510    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:49:58.094602    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:49:58.114559    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:49:58.114634    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:49:58.126308    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:49:58.126383    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:49:58.137090    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:49:58.137159    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:49:58.151796    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:49:58.151862    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:49:58.162052    4019 logs.go:276] 0 containers: []
	W0814 09:49:58.162065    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:49:58.162122    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:49:58.172929    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:49:58.172975    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:49:58.172980    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:49:58.184431    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:49:58.184446    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:49:58.196353    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:49:58.196365    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:49:58.236517    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:49:58.236532    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:49:58.270821    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:49:58.270836    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:49:58.285142    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:49:58.285154    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:49:58.299177    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:49:58.299187    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:49:58.313388    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:49:58.313400    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:49:58.327187    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:49:58.327199    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:49:58.340570    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:49:58.340580    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:49:58.366597    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:49:58.366607    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:49:58.378105    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:49:58.378115    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:49:58.393232    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:49:58.393242    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:49:58.411030    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:49:58.411044    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:49:58.434995    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:49:58.435002    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:49:58.438924    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:49:58.438930    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:49:58.450322    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:49:58.450332    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:00.972370    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:05.974464    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:05.974974    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:06.015446    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:06.015589    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:06.036997    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:06.037105    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:06.052488    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:06.052570    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:06.065065    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:06.065150    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:06.076210    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:06.076279    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:06.086942    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:06.087017    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:06.104055    4019 logs.go:276] 0 containers: []
	W0814 09:50:06.104065    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:06.104123    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:06.115319    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:06.115338    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:06.115343    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:06.127109    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:06.127120    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:06.139397    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:06.139409    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:06.163699    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:06.163711    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:06.168546    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:06.168553    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:06.197635    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:06.197646    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:06.213490    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:06.213499    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:06.234810    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:06.234819    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:06.248715    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:06.248724    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:06.287168    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:06.287179    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:06.308170    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:06.308180    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:06.319363    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:06.319376    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:06.337642    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:06.337652    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:06.351997    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:06.352011    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:06.363508    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:06.363517    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:06.399163    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:06.399178    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:06.410844    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:06.410855    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:08.928179    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:13.930338    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
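Every cycle in this section has the same shape: a healthz probe against the apiserver that fails after the five-second client timeout, followed by re-enumerating the control-plane containers and re-gathering their logs before the next probe. A rough stand-in for the probe itself, assuming the endpoint from the log and using -k only as a substitute for minikube's internal CA-aware client:

    # hedged approximation of the api_server.go healthz check
    curl -k --max-time 5 https://10.0.2.15:8443/healthz

The cycle repeats roughly every eight seconds below until the restart budget is exhausted; the duration metric at the end of the loop reports 4m3.8s.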
	I0814 09:50:13.930702    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:13.977475    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:13.977632    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:13.997387    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:13.997478    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:14.012258    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:14.012326    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:14.024548    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:14.024613    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:14.035782    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:14.035855    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:14.047253    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:14.047319    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:14.057313    4019 logs.go:276] 0 containers: []
	W0814 09:50:14.057324    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:14.057387    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:14.067596    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:14.067616    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:14.067623    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:14.071944    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:14.071951    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:14.097009    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:14.097023    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:14.114239    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:14.114252    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:14.134417    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:14.134427    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:14.172952    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:14.172961    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:14.184600    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:14.184611    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:14.196308    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:14.196318    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:14.208167    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:14.208179    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:14.242693    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:14.242704    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:14.257016    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:14.257026    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:14.280197    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:14.280209    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:14.295057    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:14.295069    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:14.312023    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:14.312037    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:14.337407    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:14.337418    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:14.355998    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:14.356012    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:14.367139    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:14.367150    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:16.893228    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:21.895238    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:21.895464    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:21.922299    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:21.922404    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:21.938054    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:21.938132    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:21.953720    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:21.953787    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:21.964805    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:21.964881    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:21.975412    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:21.975473    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:21.986268    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:21.986330    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:21.996595    4019 logs.go:276] 0 containers: []
	W0814 09:50:21.996607    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:21.996665    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:22.013564    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:22.013583    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:22.013588    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:22.038365    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:22.038374    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:22.052804    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:22.052813    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:22.069744    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:22.069756    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:22.093660    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:22.093670    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:22.105922    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:22.105933    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:22.119920    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:22.119932    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:22.137063    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:22.137074    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:22.151119    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:22.151133    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:22.162962    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:22.162975    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:22.181998    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:22.182008    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:22.216335    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:22.216346    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:22.227348    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:22.227360    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:22.238885    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:22.238898    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:22.254111    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:22.254121    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:22.270695    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:22.270708    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:22.275499    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:22.275507    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:24.812292    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:29.814366    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:29.814756    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:29.845951    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:29.846077    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:29.864613    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:29.864718    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:29.878291    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:29.878368    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:29.889897    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:29.889974    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:29.900171    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:29.900239    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:29.910216    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:29.910293    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:29.920721    4019 logs.go:276] 0 containers: []
	W0814 09:50:29.920733    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:29.920792    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:29.930807    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:29.930823    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:29.930828    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:29.945151    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:29.945164    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:29.960337    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:29.960348    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:29.975116    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:29.975127    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:30.011307    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:30.011318    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:30.028963    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:30.028976    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:30.042896    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:30.042907    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:30.055260    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:30.055272    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:30.059901    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:30.059907    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:30.084892    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:30.084902    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:30.105205    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:30.105217    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:30.126917    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:30.126926    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:30.164203    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:30.164213    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:30.179356    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:30.179367    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:30.191349    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:30.191363    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:30.206144    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:30.206158    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:30.224826    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:30.224840    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:32.736648    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:37.738627    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:37.738744    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:37.750382    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:37.750452    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:37.762089    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:37.762166    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:37.773225    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:37.773305    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:37.784541    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:37.784612    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:37.795543    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:37.795614    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:37.807317    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:37.807392    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:37.818298    4019 logs.go:276] 0 containers: []
	W0814 09:50:37.818311    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:37.818376    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:37.829696    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:37.829713    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:37.829719    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:37.848876    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:37.848887    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:37.853614    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:37.853622    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:37.869099    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:37.869111    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:37.881253    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:37.881266    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:37.906063    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:37.906077    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:37.920987    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:37.920998    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:37.936679    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:37.936691    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:37.954988    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:37.955004    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:37.967086    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:37.967098    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:37.980382    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:37.980393    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:37.995245    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:37.995257    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:38.032395    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:38.032409    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:38.060777    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:38.060791    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:38.075645    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:38.075655    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:38.087509    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:38.087520    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:38.105727    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:38.105742    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:40.644505    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:45.646484    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:45.646727    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:45.663149    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:45.663239    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:45.675130    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:45.675204    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:45.687151    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:45.687227    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:45.698051    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:45.698122    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:45.708425    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:45.708496    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:45.718800    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:45.718866    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:45.728939    4019 logs.go:276] 0 containers: []
	W0814 09:50:45.728952    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:45.729012    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:45.739100    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:45.739118    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:45.739123    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:45.751055    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:45.751067    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:45.789625    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:45.789633    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:45.793758    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:45.793766    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:45.818339    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:45.818350    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:45.832233    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:45.832243    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:45.846783    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:45.846793    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:45.858443    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:45.858456    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:45.892606    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:45.892616    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:45.904372    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:45.904385    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:45.916440    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:45.916452    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:45.928135    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:45.928145    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:45.942338    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:45.942352    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:45.957172    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:45.957183    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:45.971902    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:45.971913    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:45.982971    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:45.982982    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:46.000280    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:46.000290    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:48.524383    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:50:53.526352    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:50:53.526561    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:50:53.550913    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:50:53.551015    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:50:53.566871    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:50:53.566953    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:50:53.579620    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:50:53.579685    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:50:53.590857    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:50:53.590923    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:50:53.601801    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:50:53.601881    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:50:53.616105    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:50:53.616174    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:50:53.632881    4019 logs.go:276] 0 containers: []
	W0814 09:50:53.632893    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:50:53.632957    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:50:53.649021    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:50:53.649037    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:50:53.649045    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:50:53.653253    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:50:53.653260    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:50:53.688165    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:50:53.688174    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:50:53.702646    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:50:53.702658    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:50:53.714115    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:50:53.714128    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:50:53.725927    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:50:53.725939    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:50:53.737420    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:50:53.737430    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:50:53.751519    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:50:53.751532    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:50:53.766054    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:50:53.766065    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:50:53.777295    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:50:53.777308    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:50:53.793123    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:50:53.793132    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:50:53.816147    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:50:53.816154    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:50:53.845790    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:50:53.845800    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:50:53.862603    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:50:53.862615    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:50:53.899451    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:50:53.899461    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:50:53.914461    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:50:53.914474    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:50:53.928782    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:50:53.928795    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:50:56.440998    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:01.443002    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:01.443250    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:51:01.457552    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:51:01.457637    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:51:01.469708    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:51:01.469788    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:51:01.480794    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:51:01.480864    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:51:01.491434    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:51:01.491507    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:51:01.503882    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:51:01.503952    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:51:01.514699    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:51:01.514768    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:51:01.524684    4019 logs.go:276] 0 containers: []
	W0814 09:51:01.524693    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:51:01.524751    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:51:01.535127    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:51:01.535146    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:51:01.535151    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:51:01.577794    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:51:01.577808    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:51:01.609980    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:51:01.609994    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:51:01.621844    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:51:01.621854    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:51:01.641019    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:51:01.641029    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:51:01.654077    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:51:01.654087    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:51:01.666196    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:51:01.666208    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:51:01.670904    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:51:01.670911    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:51:01.684926    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:51:01.684937    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:51:01.701880    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:51:01.701890    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:51:01.716337    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:51:01.716348    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:51:01.729123    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:51:01.729134    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:51:01.743714    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:51:01.743724    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:51:01.756087    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:51:01.756098    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:51:01.778696    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:51:01.778704    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:51:01.815937    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:51:01.815948    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:51:01.830448    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:51:01.830462    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:51:04.344874    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:09.345318    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:09.345542    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:51:09.368641    4019 logs.go:276] 2 containers: [a0e97d1f2e98 7e949e3a70a3]
	I0814 09:51:09.368746    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:51:09.383986    4019 logs.go:276] 2 containers: [53f6a02861b1 1c40d2ec1695]
	I0814 09:51:09.384062    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:51:09.401071    4019 logs.go:276] 1 containers: [c1a47e5fa9ce]
	I0814 09:51:09.401133    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:51:09.411486    4019 logs.go:276] 2 containers: [e69eb145d498 9575eb1a63d7]
	I0814 09:51:09.411557    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:51:09.421561    4019 logs.go:276] 1 containers: [4289049e4ff9]
	I0814 09:51:09.421630    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:51:09.432212    4019 logs.go:276] 2 containers: [4731b37cec82 af86f8f14004]
	I0814 09:51:09.432278    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:51:09.442072    4019 logs.go:276] 0 containers: []
	W0814 09:51:09.442082    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:51:09.442134    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:51:09.453034    4019 logs.go:276] 2 containers: [d58abbf5afb4 3e7bc04c8fa4]
	I0814 09:51:09.453051    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:51:09.453057    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:51:09.489301    4019 logs.go:123] Gathering logs for kube-apiserver [7e949e3a70a3] ...
	I0814 09:51:09.489312    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e949e3a70a3"
	I0814 09:51:09.514296    4019 logs.go:123] Gathering logs for etcd [1c40d2ec1695] ...
	I0814 09:51:09.514308    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c40d2ec1695"
	I0814 09:51:09.529159    4019 logs.go:123] Gathering logs for kube-scheduler [e69eb145d498] ...
	I0814 09:51:09.529169    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e69eb145d498"
	I0814 09:51:09.540677    4019 logs.go:123] Gathering logs for kube-controller-manager [af86f8f14004] ...
	I0814 09:51:09.540686    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af86f8f14004"
	I0814 09:51:09.554286    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:51:09.554296    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:51:09.558969    4019 logs.go:123] Gathering logs for kube-apiserver [a0e97d1f2e98] ...
	I0814 09:51:09.558976    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0e97d1f2e98"
	I0814 09:51:09.591916    4019 logs.go:123] Gathering logs for coredns [c1a47e5fa9ce] ...
	I0814 09:51:09.591929    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a47e5fa9ce"
	I0814 09:51:09.611033    4019 logs.go:123] Gathering logs for kube-proxy [4289049e4ff9] ...
	I0814 09:51:09.611046    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4289049e4ff9"
	I0814 09:51:09.626911    4019 logs.go:123] Gathering logs for storage-provisioner [d58abbf5afb4] ...
	I0814 09:51:09.626922    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d58abbf5afb4"
	I0814 09:51:09.642765    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:51:09.642776    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:51:09.655159    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:51:09.655170    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 09:51:09.694083    4019 logs.go:123] Gathering logs for etcd [53f6a02861b1] ...
	I0814 09:51:09.694095    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53f6a02861b1"
	I0814 09:51:09.708185    4019 logs.go:123] Gathering logs for kube-controller-manager [4731b37cec82] ...
	I0814 09:51:09.708196    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4731b37cec82"
	I0814 09:51:09.725871    4019 logs.go:123] Gathering logs for kube-scheduler [9575eb1a63d7] ...
	I0814 09:51:09.725881    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9575eb1a63d7"
	I0814 09:51:09.740312    4019 logs.go:123] Gathering logs for storage-provisioner [3e7bc04c8fa4] ...
	I0814 09:51:09.740322    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e7bc04c8fa4"
	I0814 09:51:09.752014    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:51:09.752027    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:51:12.275797    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:17.277790    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:17.277873    4019 kubeadm.go:597] duration metric: took 4m3.831372375s to restartPrimaryControlPlane
	W0814 09:51:17.277935    4019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 09:51:17.277968    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0814 09:51:18.320255    4019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.04231625s)
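With the healthz probes never succeeding, minikube gives up on restarting the existing control plane and resets it. kubeadm reset --force tears the node down non-interactively through the given CRI socket and, among other things, deletes the kubeconfigs under /etc/kubernetes, which is why the ls check below finds none of them. The command, reflowed for readability:

    # --force skips the confirmation prompt; --cri-socket points kubeadm at cri-dockerd
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force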
	I0814 09:51:18.320314    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 09:51:18.325372    4019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 09:51:18.328230    4019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 09:51:18.330864    4019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 09:51:18.330869    4019 kubeadm.go:157] found existing configuration files:
	
	I0814 09:51:18.330893    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf
	I0814 09:51:18.333669    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 09:51:18.333697    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 09:51:18.336722    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf
	I0814 09:51:18.339302    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 09:51:18.339326    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 09:51:18.341993    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf
	I0814 09:51:18.344970    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 09:51:18.344992    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 09:51:18.347657    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf
	I0814 09:51:18.350231    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50269" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50269 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 09:51:18.350257    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
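The four grep-and-remove pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes survives only if it already references the expected control-plane endpoint; otherwise it is removed so the kubeadm init below can regenerate it. Here every grep exits 2 because the reset already deleted the files, so each rm -f is a no-op. A condensed sketch of the same logic:

    # equivalent to the four grep/rm pairs above (sketch)
    endpoint="https://control-plane.minikube.internal:50269"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done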
	I0814 09:51:18.353619    4019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 09:51:18.372333    4019 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0814 09:51:18.372422    4019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 09:51:18.429445    4019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 09:51:18.429536    4019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 09:51:18.429696    4019 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 09:51:18.479796    4019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 09:51:18.483092    4019 out.go:204]   - Generating certificates and keys ...
	I0814 09:51:18.483186    4019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 09:51:18.483278    4019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 09:51:18.483362    4019 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 09:51:18.483396    4019 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 09:51:18.483428    4019 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 09:51:18.483455    4019 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 09:51:18.483490    4019 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 09:51:18.483519    4019 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 09:51:18.483553    4019 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 09:51:18.483586    4019 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 09:51:18.483616    4019 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 09:51:18.483659    4019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 09:51:18.573409    4019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 09:51:18.733343    4019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 09:51:18.892816    4019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 09:51:18.953320    4019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 09:51:18.982385    4019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 09:51:18.982785    4019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 09:51:18.982850    4019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 09:51:19.067085    4019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 09:51:19.071291    4019 out.go:204]   - Booting up control plane ...
	I0814 09:51:19.071348    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 09:51:19.071402    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 09:51:19.071453    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 09:51:19.071497    4019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 09:51:19.071637    4019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 09:51:24.076851    4019 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.005281 seconds
	I0814 09:51:24.077054    4019 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 09:51:24.081155    4019 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 09:51:24.591967    4019 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 09:51:24.592072    4019 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-996000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 09:51:25.096037    4019 kubeadm.go:310] [bootstrap-token] Using token: aevy0w.o3qfcbxlyi7dbsuv
	I0814 09:51:25.102432    4019 out.go:204]   - Configuring RBAC rules ...
	I0814 09:51:25.102493    4019 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 09:51:25.102537    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 09:51:25.107374    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 09:51:25.108337    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 09:51:25.108998    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 09:51:25.109833    4019 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 09:51:25.113023    4019 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 09:51:25.273377    4019 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 09:51:25.501564    4019 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 09:51:25.502136    4019 kubeadm.go:310] 
	I0814 09:51:25.502169    4019 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 09:51:25.502175    4019 kubeadm.go:310] 
	I0814 09:51:25.502219    4019 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 09:51:25.502223    4019 kubeadm.go:310] 
	I0814 09:51:25.502244    4019 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 09:51:25.502282    4019 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 09:51:25.502322    4019 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 09:51:25.502328    4019 kubeadm.go:310] 
	I0814 09:51:25.502358    4019 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 09:51:25.502376    4019 kubeadm.go:310] 
	I0814 09:51:25.502412    4019 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 09:51:25.502415    4019 kubeadm.go:310] 
	I0814 09:51:25.502457    4019 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 09:51:25.502506    4019 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 09:51:25.502555    4019 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 09:51:25.502559    4019 kubeadm.go:310] 
	I0814 09:51:25.502608    4019 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 09:51:25.502647    4019 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 09:51:25.502650    4019 kubeadm.go:310] 
	I0814 09:51:25.502696    4019 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token aevy0w.o3qfcbxlyi7dbsuv \
	I0814 09:51:25.502758    4019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6bc1bdbbe167ab66a20d6bf1c306e986530a9d0fee84c418f91e1b4312d4e260 \
	I0814 09:51:25.502770    4019 kubeadm.go:310] 	--control-plane 
	I0814 09:51:25.502775    4019 kubeadm.go:310] 
	I0814 09:51:25.502863    4019 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 09:51:25.502866    4019 kubeadm.go:310] 
	I0814 09:51:25.502913    4019 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token aevy0w.o3qfcbxlyi7dbsuv \
	I0814 09:51:25.502969    4019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6bc1bdbbe167ab66a20d6bf1c306e986530a9d0fee84c418f91e1b4312d4e260 
	I0814 09:51:25.503102    4019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
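(Editor's note: per the kubeadm documentation, the --discovery-token-ca-cert-hash value printed in the join commands above is a SHA-256 digest of the cluster CA's DER-encoded public key, i.e. its SubjectPublicKeyInfo. A minimal Go sketch that recomputes it is below; the /etc/kubernetes/pki/ca.crt path is the conventional kubeadm location, assumed rather than taken from this log.)

	// Hedged sketch: recompute kubeadm's discovery-token-ca-cert-hash
	// from the cluster CA certificate. Path assumed from kubeadm docs.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes) // first PEM block is the CA cert
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}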
	I0814 09:51:25.503144    4019 cni.go:84] Creating CNI manager for ""
	I0814 09:51:25.503154    4019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:51:25.509827    4019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 09:51:25.513939    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 09:51:25.517743    4019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
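(Editor's note: the 496-byte /etc/cni/net.d/1-k8s.conflist payload is not echoed in the log. Based on the CNI bridge plugin's documented configuration format, a bridge conflist has roughly the shape below; field values are illustrative assumptions, not this run's actual contents.)

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}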
	I0814 09:51:25.522859    4019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 09:51:25.522910    4019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 09:51:25.522911    4019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-996000 minikube.k8s.io/updated_at=2024_08_14T09_51_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=stopped-upgrade-996000 minikube.k8s.io/primary=true
	I0814 09:51:25.565547    4019 kubeadm.go:1113] duration metric: took 42.675833ms to wait for elevateKubeSystemPrivileges
	I0814 09:51:25.565579    4019 ops.go:34] apiserver oom_adj: -16
	I0814 09:51:25.565585    4019 kubeadm.go:394] duration metric: took 4m12.1358215s to StartCluster
	I0814 09:51:25.565596    4019 settings.go:142] acquiring lock: {Name:mk45b0aba98bc9a80a7cc9e2d664f69dcf74de9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:51:25.565691    4019 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:51:25.566084    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/kubeconfig: {Name:mkd5271b15535f495ab8e34d870e7dbcadc9c40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:51:25.566278    4019 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:51:25.566297    4019 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 09:51:25.566337    4019 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-996000"
	I0814 09:51:25.566391    4019 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-996000"
	I0814 09:51:25.566390    4019 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-996000"
	W0814 09:51:25.566398    4019 addons.go:243] addon storage-provisioner should already be in state true
	I0814 09:51:25.566406    4019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-996000"
	I0814 09:51:25.566410    4019 host.go:66] Checking if "stopped-upgrade-996000" exists ...
	I0814 09:51:25.566364    4019 config.go:182] Loaded profile config "stopped-upgrade-996000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0814 09:51:25.567336    4019 kapi.go:59] client config for stopped-upgrade-996000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/stopped-upgrade-996000/client.key", CAFile:"/Users/jenkins/minikube-integration/19446-1067/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102907e30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 09:51:25.567454    4019 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-996000"
	W0814 09:51:25.567459    4019 addons.go:243] addon default-storageclass should already be in state true
	I0814 09:51:25.567465    4019 host.go:66] Checking if "stopped-upgrade-996000" exists ...
	I0814 09:51:25.569902    4019 out.go:177] * Verifying Kubernetes components...
	I0814 09:51:25.570239    4019 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 09:51:25.573023    4019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 09:51:25.573030    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	I0814 09:51:25.573882    4019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 09:51:25.577929    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 09:51:25.581969    4019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:51:25.581977    4019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 09:51:25.581985    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50234 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/stopped-upgrade-996000/id_rsa Username:docker}
	I0814 09:51:25.649481    4019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 09:51:25.655056    4019 api_server.go:52] waiting for apiserver process to appear ...
	I0814 09:51:25.655105    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 09:51:25.658948    4019 api_server.go:72] duration metric: took 92.663875ms to wait for apiserver process to appear ...
	I0814 09:51:25.658957    4019 api_server.go:88] waiting for apiserver healthz status ...
	I0814 09:51:25.658964    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:25.678131    4019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 09:51:25.724268    4019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 09:51:26.049147    4019 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0814 09:51:26.049160    4019 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0814 09:51:30.660852    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
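(Editor's note: the pattern that repeats from here on — a healthz probe roughly every five seconds, each failing with a client timeout — is an ordinary "poll until the apiserver answers" loop. A minimal Go sketch of that kind of probe follows; it is not minikube's actual implementation, and TLS verification is skipped only to keep the example self-contained.)

	// Hedged sketch of an apiserver healthz poll with a per-request
	// timeout, mirroring the "Checking ... / stopped ..." pairs below.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gaps between attempts
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err) // e.g. context deadline exceeded
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
	}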
	I0814 09:51:30.660886    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:35.660947    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:35.660966    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:40.661047    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:40.661074    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:45.661210    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:45.661234    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:50.661488    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:50.661533    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:51:55.662000    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:51:55.662028    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0814 09:51:56.050199    4019 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0814 09:51:56.054392    4019 out.go:177] * Enabled addons: storage-provisioner
	I0814 09:51:56.065315    4019 addons.go:510] duration metric: took 30.50035375s for enable addons: enabled=[storage-provisioner]
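(Editor's note: the default-storageclass failure above is the addon callback's StorageClass list request timing out against the unreachable apiserver. Expressed with client-go, that request looks roughly like the sketch below; the kubeconfig path is illustrative and this is not the addon's exact code.)

	// Hedged sketch: the kind of StorageClass list call that returned
	// "dial tcp 10.0.2.15:8443: i/o timeout" above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err) // here: the i/o timeout seen in the log
		}
		for _, sc := range scs.Items {
			fmt.Println(sc.Name)
		}
	}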
	I0814 09:52:00.662646    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:00.662697    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:05.663598    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:05.663638    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:10.665086    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:10.665134    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:15.665929    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:15.665959    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:20.667748    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:20.667786    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:25.669857    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:25.670032    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:25.681449    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:52:25.681522    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:25.692672    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:52:25.692747    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:25.704012    4019 logs.go:276] 2 containers: [9c8867ac9a63 b48b5e6429a9]
	I0814 09:52:25.704079    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:25.715099    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:52:25.715170    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:25.725810    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:52:25.725875    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:25.736909    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:52:25.736968    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:25.748230    4019 logs.go:276] 0 containers: []
	W0814 09:52:25.748243    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:25.748306    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:25.759107    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:52:25.759120    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:52:25.759127    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:52:25.771295    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:52:25.771306    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:52:25.783774    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:25.783784    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:25.819782    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:52:25.819795    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:52:25.836818    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:52:25.836829    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:52:25.852624    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:52:25.852635    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:52:25.864866    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:52:25.864882    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:52:25.877589    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:52:25.877601    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:52:25.894162    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:25.894174    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:52:25.927717    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:25.927811    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:25.928946    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:25.928951    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:25.933163    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:52:25.933170    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:52:25.955958    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:25.955972    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:25.979483    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:52:25.979491    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:25.991753    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:25.991763    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:52:25.991795    4019 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0814 09:52:25.991801    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	  Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:25.991805    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	  Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:25.991862    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:25.991882    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
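(Editor's note: each diagnostic pass above, and each of the near-identical passes that follow, has the same shape: discover container IDs with a docker ps name filter, then tail each container's logs. A compact Go sketch of that pattern is below; it is not minikube's logs.go, and it runs docker locally rather than over the SSH runner used in this test.)

	// Hedged sketch of the "docker ps --filter ... then docker logs
	// --tail 400" gathering pattern repeated in the cycles above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
			}
		}
	}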
	I0814 09:52:35.995572    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:40.997569    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:40.997735    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:41.009824    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:52:41.009901    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:41.020793    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:52:41.020863    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:41.031903    4019 logs.go:276] 2 containers: [9c8867ac9a63 b48b5e6429a9]
	I0814 09:52:41.031973    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:41.042400    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:52:41.042466    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:41.052852    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:52:41.052919    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:41.063001    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:52:41.063073    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:41.073095    4019 logs.go:276] 0 containers: []
	W0814 09:52:41.073105    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:41.073162    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:41.083766    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:52:41.083781    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:41.083786    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:41.108804    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:52:41.108813    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:41.120340    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:41.120351    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:52:41.155854    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:41.155948    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:41.157142    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:41.157149    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:41.161102    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:52:41.161108    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:52:41.175291    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:52:41.175300    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:52:41.195589    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:52:41.195599    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:52:41.210230    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:52:41.210240    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:52:41.221806    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:41.221818    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:41.261429    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:52:41.261443    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:52:41.276082    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:52:41.276094    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:52:41.288234    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:52:41.288244    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:52:41.303707    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:52:41.303719    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:52:41.321373    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:41.321384    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:52:41.321413    4019 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0814 09:52:41.321417    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	  Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:41.321421    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	  Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:41.321432    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:41.321443    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:52:51.325137    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:52:56.327125    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:52:56.327306    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:52:56.345676    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:52:56.345762    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:52:56.358588    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:52:56.358663    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:52:56.370225    4019 logs.go:276] 2 containers: [9c8867ac9a63 b48b5e6429a9]
	I0814 09:52:56.370289    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:52:56.381465    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:52:56.381534    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:52:56.392881    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:52:56.392946    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:52:56.404772    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:52:56.404841    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:52:56.414846    4019 logs.go:276] 0 containers: []
	W0814 09:52:56.414858    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:52:56.414918    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:52:56.425734    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:52:56.425750    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:52:56.425755    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:52:56.440049    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:52:56.440060    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:52:56.454717    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:52:56.454728    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:52:56.465997    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:52:56.466008    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:52:56.490586    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:52:56.490597    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:52:56.495193    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:52:56.495199    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:52:56.530193    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:52:56.530203    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:52:56.544896    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:52:56.544908    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:52:56.556788    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:52:56.556799    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:52:56.574588    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:52:56.574601    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:52:56.585959    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:52:56.585972    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:52:56.598240    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:52:56.598251    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:52:56.632802    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:56.632898    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:56.634042    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:52:56.634046    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:52:56.646081    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:56.646093    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:52:56.646120    4019 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0814 09:52:56.646125    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	  Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:52:56.646129    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	  Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:52:56.646133    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:52:56.646136    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:53:06.649858    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:11.651965    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:11.652109    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:11.666059    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:53:11.666144    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:11.681888    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:53:11.681959    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:11.697783    4019 logs.go:276] 2 containers: [9c8867ac9a63 b48b5e6429a9]
	I0814 09:53:11.697854    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:11.708595    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:53:11.708665    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:11.719307    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:53:11.719374    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:11.729974    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:53:11.730032    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:11.740342    4019 logs.go:276] 0 containers: []
	W0814 09:53:11.740355    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:11.740415    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:11.751226    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:53:11.751239    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:53:11.751244    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:53:11.763337    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:53:11.763348    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:53:11.775587    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:53:11.775601    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:53:11.787153    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:53:11.787170    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:53:11.805522    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:11.805539    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:53:11.839723    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:11.839819    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:11.841039    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:11.841046    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:11.847862    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:53:11.847873    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:53:11.863555    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:53:11.863565    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:53:11.877644    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:53:11.877654    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:53:11.890305    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:11.890316    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:11.926238    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:53:11.926248    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:53:11.941322    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:11.941330    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:11.965999    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:53:11.966009    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:11.978010    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:11.978020    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:53:11.978044    4019 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0814 09:53:11.978049    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	  Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:11.978054    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	  Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:11.978072    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:11.978076    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:53:21.980749    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:26.981022    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:26.981114    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:26.993196    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:53:26.993270    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:27.004252    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:53:27.004327    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:27.015106    4019 logs.go:276] 3 containers: [f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:53:27.015183    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:27.026263    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:53:27.026331    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:27.036860    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:53:27.036929    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:27.047395    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:53:27.047463    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:27.057670    4019 logs.go:276] 0 containers: []
	W0814 09:53:27.057680    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:27.057736    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:27.067982    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:53:27.068003    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:27.068008    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:53:27.099696    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:27.099791    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:27.101006    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:27.101011    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:27.136306    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:53:27.136316    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:53:27.147969    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:53:27.147981    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:53:27.160280    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:53:27.160290    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:53:27.171858    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:53:27.171868    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:53:27.203016    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:53:27.203030    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:53:27.226185    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:53:27.226197    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:27.246495    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:53:27.246505    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:53:27.267111    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:53:27.267137    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:53:27.290138    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:53:27.290151    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:53:27.313992    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:27.314006    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:27.319661    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:53:27.319670    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:53:27.340043    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:27.340055    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:27.364226    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:27.364239    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:53:27.364278    4019 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0814 09:53:27.364284    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	  Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:27.364288    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	  Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:27.364292    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:27.364294    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:53:37.367964    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:42.369970    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:42.370139    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:42.382451    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:53:42.382528    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:42.395005    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:53:42.395088    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:42.408315    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:53:42.408389    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:42.421118    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:53:42.421184    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:42.432538    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:53:42.432609    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:42.443500    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:53:42.443574    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:42.454803    4019 logs.go:276] 0 containers: []
	W0814 09:53:42.454818    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:42.454876    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:42.465842    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:53:42.465861    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:42.465867    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:42.501998    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:53:42.502010    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:53:42.520991    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:42.521001    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:53:42.553258    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:42.553353    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:42.554488    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:53:42.554492    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:53:42.568660    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:53:42.568673    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:53:42.580453    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:53:42.580465    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:53:42.593214    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:53:42.593229    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:53:42.605531    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:53:42.605541    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:53:42.617928    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:53:42.617937    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:53:42.634058    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:53:42.634067    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:53:42.649274    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:42.649283    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:42.653484    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:53:42.653490    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:53:42.667845    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:53:42.667854    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:53:42.683801    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:42.683812    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:42.707985    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:53:42.707992    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:42.720177    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:42.720187    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:53:42.720213    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:53:42.720222    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:42.720226    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:42.720230    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:42.720233    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:53:52.723969    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:53:57.725992    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:53:57.726093    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:53:57.739061    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:53:57.739138    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:53:57.750495    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:53:57.750564    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:53:57.766267    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:53:57.766343    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:53:57.777643    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:53:57.777710    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:53:57.789146    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:53:57.789218    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:53:57.800464    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:53:57.800535    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:53:57.810756    4019 logs.go:276] 0 containers: []
	W0814 09:53:57.810766    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:53:57.810821    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:53:57.821987    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:53:57.822004    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:53:57.822013    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:53:57.835697    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:53:57.835708    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:53:57.851925    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:53:57.851940    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:53:57.864430    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:53:57.864441    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:53:57.876896    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:53:57.876906    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:53:57.881744    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:53:57.881751    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:53:57.893921    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:53:57.893932    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:53:57.906469    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:53:57.906484    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:53:57.924437    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:53:57.924446    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:53:57.938657    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:53:57.938669    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:53:57.953811    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:53:57.953822    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:53:57.987985    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:57.988080    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:57.989268    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:53:57.989279    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:53:58.002697    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:53:58.002707    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:53:58.015181    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:53:58.015194    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:53:58.040514    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:53:58.040525    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:53:58.076138    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:58.076149    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:53:58.076174    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:53:58.076181    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:53:58.076185    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:53:58.076190    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:53:58.076192    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:54:08.079911    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:13.081772    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:13.081948    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:13.102355    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:54:13.102479    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:13.120721    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:54:13.120791    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:13.132891    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:54:13.132959    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:13.143132    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:54:13.143204    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:13.156228    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:54:13.156292    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:13.166941    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:54:13.167005    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:13.182778    4019 logs.go:276] 0 containers: []
	W0814 09:54:13.182791    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:13.182843    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:13.193268    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:54:13.193288    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:13.193295    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:13.238917    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:54:13.238927    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:54:13.254174    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:54:13.254186    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:54:13.269386    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:54:13.269397    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:54:13.287076    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:54:13.287089    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:54:13.311332    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:54:13.311345    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:54:13.325520    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:54:13.325531    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:54:13.337534    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:54:13.337548    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:54:13.349221    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:13.349236    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:13.375550    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:13.375563    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:54:13.411420    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:13.411516    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:13.412694    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:13.412699    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:13.417564    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:54:13.417573    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:54:13.430165    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:54:13.430179    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:54:13.446253    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:54:13.446267    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:54:13.458500    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:54:13.458515    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:13.469938    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:13.469948    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:54:13.469975    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:54:13.469981    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:13.469984    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:13.470000    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:13.470002    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:54:23.473649    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:28.475624    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:28.475705    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:28.486908    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:54:28.486973    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:28.497766    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:54:28.497840    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:28.508184    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:54:28.508248    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:28.520686    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:54:28.520746    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:28.531028    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:54:28.531097    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:28.541734    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:54:28.541804    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:28.551860    4019 logs.go:276] 0 containers: []
	W0814 09:54:28.551870    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:28.551932    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:28.562258    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:54:28.562274    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:28.562280    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:54:28.594919    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:28.595012    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:28.596220    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:54:28.596225    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:54:28.607885    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:54:28.607896    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:54:28.619833    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:54:28.619842    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:54:28.631151    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:54:28.631166    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:54:28.643308    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:28.643323    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:28.648121    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:54:28.648130    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:54:28.661955    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:54:28.661965    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:54:28.680609    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:54:28.680626    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:54:28.692509    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:28.692519    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:28.726353    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:54:28.726370    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:54:28.742834    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:54:28.742852    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:54:28.758038    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:28.758047    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:28.781557    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:54:28.781566    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:54:28.802176    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:54:28.802185    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:28.814018    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:28.814027    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:54:28.814054    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:54:28.814059    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:28.814063    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:28.814069    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:28.814080    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:54:38.816138    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:43.818219    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:43.818406    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:43.834441    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:54:43.834522    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:43.847859    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:54:43.847932    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:43.860251    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:54:43.860326    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:43.871264    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:54:43.871333    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:43.882176    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:54:43.882250    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:43.893277    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:54:43.893344    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:43.903983    4019 logs.go:276] 0 containers: []
	W0814 09:54:43.903992    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:43.904047    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:43.914374    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:54:43.914393    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:43.914399    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:43.918782    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:54:43.918789    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:54:43.937592    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:54:43.937605    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:54:43.948786    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:43.948797    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:43.985066    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:54:43.985080    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:54:43.997342    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:54:43.997353    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:54:44.008681    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:44.008694    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:44.034156    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:44.034168    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:54:44.066969    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:44.067062    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:44.068237    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:54:44.068243    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:54:44.082869    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:54:44.082880    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:54:44.094363    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:54:44.094372    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:54:44.109307    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:54:44.109319    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:54:44.121235    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:54:44.121247    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:54:44.138538    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:54:44.138548    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:54:44.150198    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:54:44.150208    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:44.161517    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:44.161526    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:54:44.161555    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:54:44.161559    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:44.161563    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:44.161568    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:44.161571    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:54:54.165231    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:54:59.167290    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:54:59.167432    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:54:59.182437    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:54:59.182535    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:54:59.193770    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:54:59.193842    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:54:59.204933    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:54:59.205003    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:54:59.215343    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:54:59.215402    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:54:59.226992    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:54:59.227060    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:54:59.237650    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:54:59.237724    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:54:59.247732    4019 logs.go:276] 0 containers: []
	W0814 09:54:59.247745    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:54:59.247796    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:54:59.258638    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:54:59.258651    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:54:59.258657    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:54:59.272852    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:54:59.272862    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:54:59.284515    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:54:59.284524    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:54:59.317960    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:59.318059    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:59.319275    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:54:59.319283    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:54:59.354843    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:54:59.354854    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:54:59.366788    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:54:59.366798    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:54:59.379021    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:54:59.379031    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:54:59.394706    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:54:59.394718    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:54:59.420046    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:54:59.420054    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:54:59.424401    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:54:59.424409    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:54:59.437431    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:54:59.437442    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:54:59.449813    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:54:59.449825    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:54:59.467277    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:54:59.467287    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:54:59.478887    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:54:59.478897    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:54:59.493316    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:54:59.493329    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:54:59.504677    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:59.504687    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:54:59.504712    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:54:59.504717    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:54:59.504720    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:54:59.504724    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:54:59.504726    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:55:09.508456    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:55:14.510536    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:14.510748    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0814 09:55:14.541219    4019 logs.go:276] 1 containers: [66f59011937b]
	I0814 09:55:14.541320    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0814 09:55:14.558548    4019 logs.go:276] 1 containers: [abfc7e49a585]
	I0814 09:55:14.558631    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0814 09:55:14.578310    4019 logs.go:276] 4 containers: [f998ec6c5355 f1f20b457441 9c8867ac9a63 b48b5e6429a9]
	I0814 09:55:14.578385    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0814 09:55:14.589482    4019 logs.go:276] 1 containers: [e3a20e2e124f]
	I0814 09:55:14.589548    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0814 09:55:14.599704    4019 logs.go:276] 1 containers: [bfe0052f807c]
	I0814 09:55:14.599783    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0814 09:55:14.610567    4019 logs.go:276] 1 containers: [d4c01ab8fbcc]
	I0814 09:55:14.610651    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0814 09:55:14.624967    4019 logs.go:276] 0 containers: []
	W0814 09:55:14.624978    4019 logs.go:278] No container was found matching "kindnet"
	I0814 09:55:14.625044    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0814 09:55:14.635501    4019 logs.go:276] 1 containers: [8b4bf1194ad2]
	I0814 09:55:14.635516    4019 logs.go:123] Gathering logs for kubelet ...
	I0814 09:55:14.635521    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 09:55:14.667665    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:55:14.667759    4019 logs.go:138] Found kubelet problem: Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:55:14.668901    4019 logs.go:123] Gathering logs for etcd [abfc7e49a585] ...
	I0814 09:55:14.668907    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abfc7e49a585"
	I0814 09:55:14.682657    4019 logs.go:123] Gathering logs for coredns [f998ec6c5355] ...
	I0814 09:55:14.682667    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f998ec6c5355"
	I0814 09:55:14.694573    4019 logs.go:123] Gathering logs for kube-controller-manager [d4c01ab8fbcc] ...
	I0814 09:55:14.694584    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4c01ab8fbcc"
	I0814 09:55:14.715890    4019 logs.go:123] Gathering logs for storage-provisioner [8b4bf1194ad2] ...
	I0814 09:55:14.715899    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b4bf1194ad2"
	I0814 09:55:14.727509    4019 logs.go:123] Gathering logs for dmesg ...
	I0814 09:55:14.727519    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 09:55:14.732195    4019 logs.go:123] Gathering logs for coredns [9c8867ac9a63] ...
	I0814 09:55:14.732200    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c8867ac9a63"
	I0814 09:55:14.744205    4019 logs.go:123] Gathering logs for Docker ...
	I0814 09:55:14.744215    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0814 09:55:14.767026    4019 logs.go:123] Gathering logs for container status ...
	I0814 09:55:14.767036    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 09:55:14.779654    4019 logs.go:123] Gathering logs for describe nodes ...
	I0814 09:55:14.779666    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 09:55:14.814752    4019 logs.go:123] Gathering logs for kube-apiserver [66f59011937b] ...
	I0814 09:55:14.814763    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66f59011937b"
	I0814 09:55:14.829962    4019 logs.go:123] Gathering logs for coredns [f1f20b457441] ...
	I0814 09:55:14.829972    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1f20b457441"
	I0814 09:55:14.841509    4019 logs.go:123] Gathering logs for coredns [b48b5e6429a9] ...
	I0814 09:55:14.841519    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b48b5e6429a9"
	I0814 09:55:14.852924    4019 logs.go:123] Gathering logs for kube-scheduler [e3a20e2e124f] ...
	I0814 09:55:14.852933    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a20e2e124f"
	I0814 09:55:14.867522    4019 logs.go:123] Gathering logs for kube-proxy [bfe0052f807c] ...
	I0814 09:55:14.867532    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe0052f807c"
	I0814 09:55:14.879786    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:55:14.879797    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 09:55:14.879828    4019 out.go:239] X Problems detected in kubelet:
	W0814 09:55:14.879833    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: W0814 16:51:39.143474   10542 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	W0814 09:55:14.879837    4019 out.go:239]   Aug 14 16:51:39 stopped-upgrade-996000 kubelet[10542]: E0814 16:51:39.143489   10542 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-996000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-996000' and this object
	I0814 09:55:14.879842    4019 out.go:304] Setting ErrFile to fd 2...
	I0814 09:55:14.879845    4019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:55:24.883606    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0814 09:55:29.885683    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0814 09:55:29.889996    4019 out.go:177] 
	W0814 09:55:29.894010    4019 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0814 09:55:29.894018    4019 out.go:239] * 
	W0814 09:55:29.894519    4019 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:55:29.902890    4019 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-996000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (587.68s)
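For context: the loop above is minikube polling https://10.0.2.15:8443/healthz roughly every ten seconds until its 6m0s node-wait budget runs out, while kubelet keeps reporting the same node-authorization failure for the kube-proxy ConfigMap. A minimal manual probe of the same endpoint, assuming the stopped-upgrade-996000 guest were still reachable (hypothetical commands, not part of the test run):

    # Probe the apiserver healthz endpoint from inside the guest.
    # -k skips TLS verification, since the cluster CA is not loaded here.
    minikube -p stopped-upgrade-996000 ssh -- \
      curl -k --max-time 5 https://10.0.2.15:8443/healthz

A healthy apiserver answers "ok"; anything else (or a hang) reproduces what the wait loop saw.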

                                                
                                    
TestPause/serial/Start (9.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-260000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-260000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.901829375s)

                                                
                                                
-- stdout --
	* [pause-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-260000" primary control-plane node in "pause-260000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-260000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-260000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-260000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-260000 -n pause-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-260000 -n pause-260000: exit status 7 (47.486584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.95s)
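
Note on the root cause: this failure, and every qemu2 "exit status 80" below, traces to the same host-side defect: nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client is refused before QEMU is even launched. A preflight probe of that socket, as a self-contained Go sketch (not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client hands to QEMU.
        // "connection refused" (daemon dead) or "no such file or directory"
        // (daemon never started) reproduces the ERROR lines captured above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is up")
    }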

TestNoKubernetes/serial/StartWithK8s (10.63s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-463000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-463000 --driver=qemu2 : exit status 80 (10.58066075s)

-- stdout --
	* [NoKubernetes-463000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-463000" primary control-plane node in "NoKubernetes-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-463000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-463000 -n NoKubernetes-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-463000 -n NoKubernetes-463000: exit status 7 (50.24325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.63s)

TestNoKubernetes/serial/StartWithStopK8s (7.55s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-463000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-463000 --no-kubernetes --driver=qemu2 : exit status 80 (7.495469958s)

-- stdout --
	* [NoKubernetes-463000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-463000
	* Restarting existing qemu2 VM for "NoKubernetes-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-463000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-463000 -n NoKubernetes-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-463000 -n NoKubernetes-463000: exit status 7 (51.64575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.55s)
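
Note on the post-mortem probes: "minikube status --format={{.Host}}" renders the status through a Go text/template, which is why the only stdout in these post-mortems is the Host field, "Stopped", and why helpers_test treats exit status 7 as "may be ok" for a host that simply is not running. A sketch of the same templating mechanism; the Status struct below is a hypothetical stand-in, not minikube's type:

    package main

    import (
        "os"
        "text/template"
    )

    // Status mimics the shape a --format template would be applied to.
    type Status struct {
        Host      string
        Kubelet   string
        APIServer string
    }

    func main() {
        // --format={{.Host}} selects a single field, exactly like this.
        tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
        _ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"})
    }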

TestNoKubernetes/serial/Start (7.51s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-463000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-463000 --no-kubernetes --driver=qemu2 : exit status 80 (7.473518708s)

-- stdout --
	* [NoKubernetes-463000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-463000
	* Restarting existing qemu2 VM for "NoKubernetes-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-463000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-463000 -n NoKubernetes-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-463000 -n NoKubernetes-463000: exit status 7 (33.290083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (7.51s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.12s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19446
- KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2368005038/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.12s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.36s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19446
- KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2465437667/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.36s)
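
Note on the two hyperkit failures: hyperkit is an Intel-only hypervisor and this worker is darwin/arm64, so minikube rejects the driver outright (DRV_UNSUPPORTED_OS, exit status 56). A guard of roughly the following shape (a sketch, not the repository's actual code) would skip instead of fail on Apple Silicon agents:

    package driver_test

    import (
        "runtime"
        "testing"
    )

    // skipIfNoHyperkit skips hyperkit-specific tests on hosts that can
    // never run the driver, such as the darwin/arm64 agent in this run.
    func skipIfNoHyperkit(t *testing.T) {
        t.Helper()
        if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
            t.Skipf("hyperkit is not supported on %s/%s", runtime.GOOS, runtime.GOARCH)
        }
    }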

TestNoKubernetes/serial/StartNoArgs (5.36s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-463000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-463000 --driver=qemu2 : exit status 80 (5.291274417s)

-- stdout --
	* [NoKubernetes-463000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-463000
	* Restarting existing qemu2 VM for "NoKubernetes-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-463000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-463000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-463000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-463000 -n NoKubernetes-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-463000 -n NoKubernetes-463000: exit status 7 (69.376917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-463000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.36s)

TestNetworkPlugins/group/auto/Start (9.85s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.847128s)

-- stdout --
	* [auto-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-625000" primary control-plane node in "auto-625000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-625000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:57:09.853283    4508 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:57:09.853420    4508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:09.853424    4508 out.go:304] Setting ErrFile to fd 2...
	I0814 09:57:09.853426    4508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:09.853548    4508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:57:09.854597    4508 out.go:298] Setting JSON to false
	I0814 09:57:09.870825    4508 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3386,"bootTime":1723651243,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:57:09.870925    4508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:57:09.877520    4508 out.go:177] * [auto-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:57:09.885449    4508 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:57:09.885476    4508 notify.go:220] Checking for updates...
	I0814 09:57:09.892555    4508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:57:09.895658    4508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:57:09.898562    4508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:57:09.901576    4508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:57:09.904465    4508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:57:09.907921    4508 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:57:09.907999    4508 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:57:09.908047    4508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:57:09.912518    4508 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:57:09.919513    4508 start.go:297] selected driver: qemu2
	I0814 09:57:09.919527    4508 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:57:09.919533    4508 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:57:09.921815    4508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:57:09.924572    4508 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:57:09.927526    4508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:57:09.927546    4508 cni.go:84] Creating CNI manager for ""
	I0814 09:57:09.927554    4508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:57:09.927563    4508 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:57:09.927600    4508 start.go:340] cluster config:
	{Name:auto-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:57:09.931172    4508 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:57:09.938546    4508 out.go:177] * Starting "auto-625000" primary control-plane node in "auto-625000" cluster
	I0814 09:57:09.942535    4508 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:57:09.942552    4508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:57:09.942562    4508 cache.go:56] Caching tarball of preloaded images
	I0814 09:57:09.942631    4508 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:57:09.942637    4508 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:57:09.942702    4508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/auto-625000/config.json ...
	I0814 09:57:09.942713    4508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/auto-625000/config.json: {Name:mk34718d97cf8e3ee0c794088ad5619aca5ba515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:09.942933    4508 start.go:360] acquireMachinesLock for auto-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:57:09.942967    4508 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "auto-625000"
	I0814 09:57:09.942980    4508 start.go:93] Provisioning new machine with config: &{Name:auto-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:57:09.943012    4508 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:57:09.951480    4508 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:57:09.969243    4508 start.go:159] libmachine.API.Create for "auto-625000" (driver="qemu2")
	I0814 09:57:09.969276    4508 client.go:168] LocalClient.Create starting
	I0814 09:57:09.969346    4508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:57:09.969374    4508 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:09.969383    4508 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:09.969415    4508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:57:09.969437    4508 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:09.969445    4508 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:09.969825    4508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:57:10.120448    4508 main.go:141] libmachine: Creating SSH key...
	I0814 09:57:10.195455    4508 main.go:141] libmachine: Creating Disk image...
	I0814 09:57:10.195461    4508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:57:10.195626    4508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2
	I0814 09:57:10.204793    4508 main.go:141] libmachine: STDOUT: 
	I0814 09:57:10.204811    4508 main.go:141] libmachine: STDERR: 
	I0814 09:57:10.204848    4508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2 +20000M
	I0814 09:57:10.212826    4508 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:57:10.212839    4508 main.go:141] libmachine: STDERR: 
	I0814 09:57:10.212854    4508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2
	I0814 09:57:10.212857    4508 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:57:10.212870    4508 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:57:10.212892    4508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:e5:37:93:44:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2
	I0814 09:57:10.214438    4508 main.go:141] libmachine: STDOUT: 
	I0814 09:57:10.214453    4508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:57:10.214473    4508 client.go:171] duration metric: took 245.202459ms to LocalClient.Create
	I0814 09:57:12.216555    4508 start.go:128] duration metric: took 2.273619917s to createHost
	I0814 09:57:12.216663    4508 start.go:83] releasing machines lock for "auto-625000", held for 2.273747417s
	W0814 09:57:12.216709    4508 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:12.227649    4508 out.go:177] * Deleting "auto-625000" in qemu2 ...
	W0814 09:57:12.265941    4508 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:12.265965    4508 start.go:729] Will try again in 5 seconds ...
	I0814 09:57:17.267953    4508 start.go:360] acquireMachinesLock for auto-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:57:17.268493    4508 start.go:364] duration metric: took 435.75µs to acquireMachinesLock for "auto-625000"
	I0814 09:57:17.268614    4508 start.go:93] Provisioning new machine with config: &{Name:auto-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:57:17.268862    4508 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:57:17.279347    4508 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:57:17.329885    4508 start.go:159] libmachine.API.Create for "auto-625000" (driver="qemu2")
	I0814 09:57:17.329941    4508 client.go:168] LocalClient.Create starting
	I0814 09:57:17.330054    4508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:57:17.330114    4508 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:17.330129    4508 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:17.330202    4508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:57:17.330246    4508 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:17.330260    4508 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:17.330801    4508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:57:17.496229    4508 main.go:141] libmachine: Creating SSH key...
	I0814 09:57:17.603206    4508 main.go:141] libmachine: Creating Disk image...
	I0814 09:57:17.603211    4508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:57:17.603390    4508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2
	I0814 09:57:17.612478    4508 main.go:141] libmachine: STDOUT: 
	I0814 09:57:17.612495    4508 main.go:141] libmachine: STDERR: 
	I0814 09:57:17.612539    4508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2 +20000M
	I0814 09:57:17.620442    4508 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:57:17.620457    4508 main.go:141] libmachine: STDERR: 
	I0814 09:57:17.620470    4508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2
	I0814 09:57:17.620475    4508 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:57:17.620486    4508 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:57:17.620509    4508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:66:90:e4:9e:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/auto-625000/disk.qcow2
	I0814 09:57:17.622185    4508 main.go:141] libmachine: STDOUT: 
	I0814 09:57:17.622203    4508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:57:17.622215    4508 client.go:171] duration metric: took 292.283209ms to LocalClient.Create
	I0814 09:57:19.624320    4508 start.go:128] duration metric: took 2.355511417s to createHost
	I0814 09:57:19.624399    4508 start.go:83] releasing machines lock for "auto-625000", held for 2.355983292s
	W0814 09:57:19.624786    4508 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:19.638483    4508 out.go:177] 
	W0814 09:57:19.642543    4508 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:57:19.642568    4508 out.go:239] * 
	* 
	W0814 09:57:19.645321    4508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:57:19.658395    4508 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.85s)
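
Note on the trace above: the alsologtostderr log lays out minikube's full create-and-retry flow: libmachine prepares the qcow2 disk with qemu-img, the socket_vmnet_client invocation is refused (start.go:714), the half-created "auto-625000" machine is deleted, start.go:729 waits 5 seconds, and a second create attempt fails identically before GUEST_PROVISION is raised. The control flow is roughly the following; this is a sketch of the pattern, not minikube's code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

    // createHost stands in for libmachine.API.Create, which in this run
    // fails at socket_vmnet_client before QEMU ever starts.
    func createHost(name string) error { return errRefused }

    func deleteHost(name string) {} // "* Deleting <name> in qemu2 ..."

    func startWithRetry(name string) error {
        if err := createHost(name); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            deleteHost(name)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := createHost(name); err != nil {
                return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
            }
        }
        return nil
    }

    func main() { fmt.Println(startWithRetry("auto-625000")) }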

TestNetworkPlugins/group/flannel/Start (9.98s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.981782125s)

-- stdout --
	* [flannel-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-625000" primary control-plane node in "flannel-625000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-625000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:57:21.843691    4622 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:57:21.843827    4622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:21.843830    4622 out.go:304] Setting ErrFile to fd 2...
	I0814 09:57:21.843833    4622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:21.843966    4622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:57:21.845046    4622 out.go:298] Setting JSON to false
	I0814 09:57:21.861372    4622 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3398,"bootTime":1723651243,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:57:21.861440    4622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:57:21.868115    4622 out.go:177] * [flannel-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:57:21.875130    4622 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:57:21.875191    4622 notify.go:220] Checking for updates...
	I0814 09:57:21.882067    4622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:57:21.885091    4622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:57:21.888099    4622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:57:21.891100    4622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:57:21.894118    4622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:57:21.897482    4622 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:57:21.897551    4622 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:57:21.897599    4622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:57:21.902017    4622 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:57:21.909060    4622 start.go:297] selected driver: qemu2
	I0814 09:57:21.909067    4622 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:57:21.909073    4622 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:57:21.911441    4622 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:57:21.915029    4622 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:57:21.918200    4622 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:57:21.918252    4622 cni.go:84] Creating CNI manager for "flannel"
	I0814 09:57:21.918257    4622 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0814 09:57:21.918289    4622 start.go:340] cluster config:
	{Name:flannel-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:57:21.921902    4622 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:57:21.928037    4622 out.go:177] * Starting "flannel-625000" primary control-plane node in "flannel-625000" cluster
	I0814 09:57:21.932088    4622 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:57:21.932101    4622 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:57:21.932110    4622 cache.go:56] Caching tarball of preloaded images
	I0814 09:57:21.932166    4622 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:57:21.932172    4622 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:57:21.932230    4622 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/flannel-625000/config.json ...
	I0814 09:57:21.932241    4622 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/flannel-625000/config.json: {Name:mk8100f24399c603cdfeeb9ec14b49d605dbc2b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:21.932562    4622 start.go:360] acquireMachinesLock for flannel-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:57:21.932595    4622 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "flannel-625000"
	I0814 09:57:21.932607    4622 start.go:93] Provisioning new machine with config: &{Name:flannel-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:57:21.932638    4622 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:57:21.940113    4622 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:57:21.957458    4622 start.go:159] libmachine.API.Create for "flannel-625000" (driver="qemu2")
	I0814 09:57:21.957486    4622 client.go:168] LocalClient.Create starting
	I0814 09:57:21.957542    4622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:57:21.957573    4622 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:21.957585    4622 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:21.957620    4622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:57:21.957641    4622 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:21.957649    4622 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:21.958124    4622 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:57:22.108953    4622 main.go:141] libmachine: Creating SSH key...
	I0814 09:57:22.274582    4622 main.go:141] libmachine: Creating Disk image...
	I0814 09:57:22.274588    4622 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:57:22.274801    4622 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2
	I0814 09:57:22.284549    4622 main.go:141] libmachine: STDOUT: 
	I0814 09:57:22.284578    4622 main.go:141] libmachine: STDERR: 
	I0814 09:57:22.284636    4622 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2 +20000M
	I0814 09:57:22.292608    4622 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:57:22.292622    4622 main.go:141] libmachine: STDERR: 
	I0814 09:57:22.292645    4622 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2
	I0814 09:57:22.292651    4622 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:57:22.292664    4622 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:57:22.292688    4622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:51:3b:eb:3b:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2
	I0814 09:57:22.294309    4622 main.go:141] libmachine: STDOUT: 
	I0814 09:57:22.294325    4622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:57:22.294345    4622 client.go:171] duration metric: took 336.868459ms to LocalClient.Create
	I0814 09:57:24.296434    4622 start.go:128] duration metric: took 2.363878791s to createHost
	I0814 09:57:24.296550    4622 start.go:83] releasing machines lock for "flannel-625000", held for 2.364003917s
	W0814 09:57:24.296612    4622 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:24.312813    4622 out.go:177] * Deleting "flannel-625000" in qemu2 ...
	W0814 09:57:24.344103    4622 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:24.344133    4622 start.go:729] Will try again in 5 seconds ...
	I0814 09:57:29.346172    4622 start.go:360] acquireMachinesLock for flannel-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:57:29.347149    4622 start.go:364] duration metric: took 866.209µs to acquireMachinesLock for "flannel-625000"
	I0814 09:57:29.347334    4622 start.go:93] Provisioning new machine with config: &{Name:flannel-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:57:29.347605    4622 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:57:29.365160    4622 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:57:29.414719    4622 start.go:159] libmachine.API.Create for "flannel-625000" (driver="qemu2")
	I0814 09:57:29.414770    4622 client.go:168] LocalClient.Create starting
	I0814 09:57:29.414887    4622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:57:29.414950    4622 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:29.414965    4622 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:29.415051    4622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:57:29.415096    4622 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:29.415107    4622 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:29.415831    4622 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:57:29.574599    4622 main.go:141] libmachine: Creating SSH key...
	I0814 09:57:29.727758    4622 main.go:141] libmachine: Creating Disk image...
	I0814 09:57:29.727764    4622 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:57:29.727963    4622 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2
	I0814 09:57:29.737679    4622 main.go:141] libmachine: STDOUT: 
	I0814 09:57:29.737698    4622 main.go:141] libmachine: STDERR: 
	I0814 09:57:29.737750    4622 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2 +20000M
	I0814 09:57:29.745916    4622 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:57:29.745936    4622 main.go:141] libmachine: STDERR: 
	I0814 09:57:29.745948    4622 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2
	I0814 09:57:29.745955    4622 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:57:29.745963    4622 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:57:29.746013    4622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:46:27:8a:71:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/flannel-625000/disk.qcow2
	I0814 09:57:29.747702    4622 main.go:141] libmachine: STDOUT: 
	I0814 09:57:29.747718    4622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:57:29.747729    4622 client.go:171] duration metric: took 332.968458ms to LocalClient.Create
	I0814 09:57:31.749858    4622 start.go:128] duration metric: took 2.402291959s to createHost
	I0814 09:57:31.749958    4622 start.go:83] releasing machines lock for "flannel-625000", held for 2.402842833s
	W0814 09:57:31.750482    4622 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:31.760024    4622 out.go:177] 
	W0814 09:57:31.770242    4622 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:57:31.770294    4622 out.go:239] * 
	W0814 09:57:31.773158    4622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:57:31.782042    4622 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.98s)
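
Editor's note: the failure above is environmental, not flannel-specific. With the qemu2 driver on the socket_vmnet network, minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which obtains the VM's network file descriptor (the -netdev socket,id=net0,fd=3 argument in the command line above) from the socket_vmnet daemon listening on /var/run/socket_vmnet. "Connection refused" means no daemon is bound to that UNIX socket, so every VM creation on this agent fails before boot. A minimal diagnostic sketch, assuming the paths shown in the log:

	# Is the daemon behind /var/run/socket_vmnet actually up?
	ls -l /var/run/socket_vmnet        # the UNIX socket should exist
	pgrep -fl socket_vmnet             # a root-owned socket_vmnet process should be running
	# Reproduce minikube's failure mode without booting a VM: the client connects
	# to the socket and execs the given command with the descriptor attached.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the last command prints the same Failed to connect ... Connection refused, the daemon is down, and the retry five seconds later (seen in the log) can never succeed.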

TestNetworkPlugins/group/enable-default-cni/Start (9.96s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.957612083s)

-- stdout --
	* [enable-default-cni-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-625000" primary control-plane node in "enable-default-cni-625000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-625000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:57:34.143245    4743 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:57:34.143368    4743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:34.143371    4743 out.go:304] Setting ErrFile to fd 2...
	I0814 09:57:34.143373    4743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:34.143768    4743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:57:34.145236    4743 out.go:298] Setting JSON to false
	I0814 09:57:34.161765    4743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3411,"bootTime":1723651243,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:57:34.161839    4743 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:57:34.168680    4743 out.go:177] * [enable-default-cni-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:57:34.176806    4743 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:57:34.176841    4743 notify.go:220] Checking for updates...
	I0814 09:57:34.184745    4743 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:57:34.187783    4743 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:57:34.190698    4743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:57:34.193772    4743 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:57:34.196798    4743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:57:34.200087    4743 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:57:34.200164    4743 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:57:34.200211    4743 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:57:34.204779    4743 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:57:34.211629    4743 start.go:297] selected driver: qemu2
	I0814 09:57:34.211636    4743 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:57:34.211642    4743 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:57:34.213914    4743 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:57:34.216749    4743 out.go:177] * Automatically selected the socket_vmnet network
	E0814 09:57:34.219899    4743 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0814 09:57:34.219916    4743 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:57:34.219956    4743 cni.go:84] Creating CNI manager for "bridge"
	I0814 09:57:34.219960    4743 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:57:34.219990    4743 start.go:340] cluster config:
	{Name:enable-default-cni-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:57:34.223688    4743 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:57:34.230718    4743 out.go:177] * Starting "enable-default-cni-625000" primary control-plane node in "enable-default-cni-625000" cluster
	I0814 09:57:34.234729    4743 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:57:34.234745    4743 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:57:34.234758    4743 cache.go:56] Caching tarball of preloaded images
	I0814 09:57:34.234829    4743 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:57:34.234845    4743 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:57:34.234932    4743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/enable-default-cni-625000/config.json ...
	I0814 09:57:34.234944    4743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/enable-default-cni-625000/config.json: {Name:mkb5894207038250cf2b12330d43165f47778117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:34.235173    4743 start.go:360] acquireMachinesLock for enable-default-cni-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:57:34.235212    4743 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "enable-default-cni-625000"
	I0814 09:57:34.235226    4743 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:57:34.235258    4743 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:57:34.243606    4743 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:57:34.262239    4743 start.go:159] libmachine.API.Create for "enable-default-cni-625000" (driver="qemu2")
	I0814 09:57:34.262270    4743 client.go:168] LocalClient.Create starting
	I0814 09:57:34.262342    4743 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:57:34.262373    4743 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:34.262383    4743 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:34.262419    4743 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:57:34.262442    4743 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:34.262454    4743 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:34.262897    4743 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:57:34.413389    4743 main.go:141] libmachine: Creating SSH key...
	I0814 09:57:34.499776    4743 main.go:141] libmachine: Creating Disk image...
	I0814 09:57:34.499782    4743 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:57:34.499962    4743 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2
	I0814 09:57:34.509133    4743 main.go:141] libmachine: STDOUT: 
	I0814 09:57:34.509152    4743 main.go:141] libmachine: STDERR: 
	I0814 09:57:34.509214    4743 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2 +20000M
	I0814 09:57:34.517114    4743 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:57:34.517130    4743 main.go:141] libmachine: STDERR: 
	I0814 09:57:34.517148    4743 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2
	I0814 09:57:34.517153    4743 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:57:34.517177    4743 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:57:34.517202    4743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:6c:c5:01:33:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2
	I0814 09:57:34.518759    4743 main.go:141] libmachine: STDOUT: 
	I0814 09:57:34.518774    4743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:57:34.518796    4743 client.go:171] duration metric: took 256.531167ms to LocalClient.Create
	I0814 09:57:36.520978    4743 start.go:128] duration metric: took 2.285768959s to createHost
	I0814 09:57:36.521060    4743 start.go:83] releasing machines lock for "enable-default-cni-625000", held for 2.285935625s
	W0814 09:57:36.521115    4743 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:36.530738    4743 out.go:177] * Deleting "enable-default-cni-625000" in qemu2 ...
	W0814 09:57:36.569558    4743 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:36.569586    4743 start.go:729] Will try again in 5 seconds ...
	I0814 09:57:41.571564    4743 start.go:360] acquireMachinesLock for enable-default-cni-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:57:41.572046    4743 start.go:364] duration metric: took 398.375µs to acquireMachinesLock for "enable-default-cni-625000"
	I0814 09:57:41.572159    4743 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:57:41.572461    4743 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:57:41.588840    4743 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:57:41.638686    4743 start.go:159] libmachine.API.Create for "enable-default-cni-625000" (driver="qemu2")
	I0814 09:57:41.638804    4743 client.go:168] LocalClient.Create starting
	I0814 09:57:41.638919    4743 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:57:41.638974    4743 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:41.638994    4743 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:41.639056    4743 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:57:41.639099    4743 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:41.639115    4743 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:41.639977    4743 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:57:41.801421    4743 main.go:141] libmachine: Creating SSH key...
	I0814 09:57:41.999184    4743 main.go:141] libmachine: Creating Disk image...
	I0814 09:57:41.999191    4743 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:57:41.999390    4743 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2
	I0814 09:57:42.008779    4743 main.go:141] libmachine: STDOUT: 
	I0814 09:57:42.008800    4743 main.go:141] libmachine: STDERR: 
	I0814 09:57:42.008854    4743 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2 +20000M
	I0814 09:57:42.016825    4743 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:57:42.016839    4743 main.go:141] libmachine: STDERR: 
	I0814 09:57:42.016851    4743 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2
	I0814 09:57:42.016861    4743 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:57:42.016871    4743 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:57:42.016907    4743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:d2:b7:d1:b7:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/enable-default-cni-625000/disk.qcow2
	I0814 09:57:42.018538    4743 main.go:141] libmachine: STDOUT: 
	I0814 09:57:42.018554    4743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:57:42.018566    4743 client.go:171] duration metric: took 379.77275ms to LocalClient.Create
	I0814 09:57:44.020653    4743 start.go:128] duration metric: took 2.448250917s to createHost
	I0814 09:57:44.020718    4743 start.go:83] releasing machines lock for "enable-default-cni-625000", held for 2.448749375s
	W0814 09:57:44.021174    4743 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:44.036018    4743 out.go:177] 
	W0814 09:57:44.039989    4743 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:57:44.040025    4743 out.go:239] * 
	W0814 09:57:44.042348    4743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:57:44.057072    4743 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.96s)
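
Editor's note: besides the daemon problem, the E-line above shows minikube rewriting the deprecated --enable-default-cni flag to --cni=bridge, so an equivalent modern invocation simply passes --cni=bridge. A remediation sketch for the host, under the assumption that socket_vmnet was installed via Homebrew and should run as a root-owned service (the vmnet framework itself requires root):

	# Install and start the daemon that listens on /var/run/socket_vmnet.
	brew install socket_vmnet
	brew tap homebrew/services
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet
	# Re-run the same start without the deprecated flag:
	out/minikube-darwin-arm64 start -p enable-default-cni-625000 --memory=3072 --cni=bridge --driver=qemu2

Once the daemon is up, socket_vmnet_client can hand the qemu process its network descriptor and the "Creating qemu2 VM" step should proceed past the connection error.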

TestNetworkPlugins/group/kindnet/Start (9.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.813601792s)

-- stdout --
	* [kindnet-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-625000" primary control-plane node in "kindnet-625000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-625000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:57:46.285994    4852 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:57:46.286129    4852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:46.286132    4852 out.go:304] Setting ErrFile to fd 2...
	I0814 09:57:46.286135    4852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:46.286252    4852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:57:46.287296    4852 out.go:298] Setting JSON to false
	I0814 09:57:46.303474    4852 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3423,"bootTime":1723651243,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:57:46.303551    4852 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:57:46.309515    4852 out.go:177] * [kindnet-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:57:46.317390    4852 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:57:46.317450    4852 notify.go:220] Checking for updates...
	I0814 09:57:46.321358    4852 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:57:46.324435    4852 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:57:46.327430    4852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:57:46.330358    4852 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:57:46.333402    4852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:57:46.336714    4852 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:57:46.336782    4852 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:57:46.336841    4852 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:57:46.341358    4852 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:57:46.348356    4852 start.go:297] selected driver: qemu2
	I0814 09:57:46.348363    4852 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:57:46.348369    4852 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:57:46.350784    4852 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:57:46.353381    4852 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:57:46.356514    4852 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:57:46.356558    4852 cni.go:84] Creating CNI manager for "kindnet"
	I0814 09:57:46.356568    4852 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 09:57:46.356607    4852 start.go:340] cluster config:
	{Name:kindnet-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:57:46.360424    4852 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:57:46.366400    4852 out.go:177] * Starting "kindnet-625000" primary control-plane node in "kindnet-625000" cluster
	I0814 09:57:46.370334    4852 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:57:46.370348    4852 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:57:46.370356    4852 cache.go:56] Caching tarball of preloaded images
	I0814 09:57:46.370414    4852 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:57:46.370420    4852 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:57:46.370483    4852 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/kindnet-625000/config.json ...
	I0814 09:57:46.370494    4852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/kindnet-625000/config.json: {Name:mkfaf63354a8c0064301900087b608414f5e17c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:46.370824    4852 start.go:360] acquireMachinesLock for kindnet-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:57:46.370857    4852 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "kindnet-625000"
	I0814 09:57:46.370869    4852 start.go:93] Provisioning new machine with config: &{Name:kindnet-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:57:46.370918    4852 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:57:46.379336    4852 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:57:46.397306    4852 start.go:159] libmachine.API.Create for "kindnet-625000" (driver="qemu2")
	I0814 09:57:46.397334    4852 client.go:168] LocalClient.Create starting
	I0814 09:57:46.397403    4852 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:57:46.397438    4852 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:46.397452    4852 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:46.397487    4852 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:57:46.397514    4852 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:46.397522    4852 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:46.397872    4852 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:57:46.553192    4852 main.go:141] libmachine: Creating SSH key...
	I0814 09:57:46.611432    4852 main.go:141] libmachine: Creating Disk image...
	I0814 09:57:46.611438    4852 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:57:46.611895    4852 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2
	I0814 09:57:46.621152    4852 main.go:141] libmachine: STDOUT: 
	I0814 09:57:46.621176    4852 main.go:141] libmachine: STDERR: 
	I0814 09:57:46.621225    4852 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2 +20000M
	I0814 09:57:46.629152    4852 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:57:46.629167    4852 main.go:141] libmachine: STDERR: 
	I0814 09:57:46.629190    4852 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2
	I0814 09:57:46.629196    4852 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:57:46.629209    4852 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:57:46.629245    4852 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:f2:7d:aa:c6:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2
	I0814 09:57:46.630816    4852 main.go:141] libmachine: STDOUT: 
	I0814 09:57:46.630833    4852 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:57:46.630851    4852 client.go:171] duration metric: took 233.521416ms to LocalClient.Create
	I0814 09:57:48.632947    4852 start.go:128] duration metric: took 2.262105209s to createHost
	I0814 09:57:48.633035    4852 start.go:83] releasing machines lock for "kindnet-625000", held for 2.262266458s
	W0814 09:57:48.633147    4852 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:48.644271    4852 out.go:177] * Deleting "kindnet-625000" in qemu2 ...
	W0814 09:57:48.683823    4852 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:48.683844    4852 start.go:729] Will try again in 5 seconds ...
	I0814 09:57:53.685809    4852 start.go:360] acquireMachinesLock for kindnet-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:57:53.686272    4852 start.go:364] duration metric: took 363.334µs to acquireMachinesLock for "kindnet-625000"
	I0814 09:57:53.686383    4852 start.go:93] Provisioning new machine with config: &{Name:kindnet-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:57:53.686691    4852 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:57:53.703287    4852 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:57:53.753876    4852 start.go:159] libmachine.API.Create for "kindnet-625000" (driver="qemu2")
	I0814 09:57:53.753924    4852 client.go:168] LocalClient.Create starting
	I0814 09:57:53.754042    4852 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:57:53.754108    4852 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:53.754125    4852 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:53.754191    4852 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:57:53.754236    4852 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:53.754246    4852 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:53.754884    4852 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:57:53.915369    4852 main.go:141] libmachine: Creating SSH key...
	I0814 09:57:54.004992    4852 main.go:141] libmachine: Creating Disk image...
	I0814 09:57:54.004997    4852 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:57:54.005178    4852 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2
	I0814 09:57:54.014282    4852 main.go:141] libmachine: STDOUT: 
	I0814 09:57:54.014299    4852 main.go:141] libmachine: STDERR: 
	I0814 09:57:54.014340    4852 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2 +20000M
	I0814 09:57:54.022350    4852 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:57:54.022366    4852 main.go:141] libmachine: STDERR: 
	I0814 09:57:54.022378    4852 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2
	I0814 09:57:54.022382    4852 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:57:54.022392    4852 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:57:54.022423    4852 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:5c:4b:f9:e3:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kindnet-625000/disk.qcow2
	I0814 09:57:54.024020    4852 main.go:141] libmachine: STDOUT: 
	I0814 09:57:54.024037    4852 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:57:54.024049    4852 client.go:171] duration metric: took 270.131792ms to LocalClient.Create
	I0814 09:57:56.026138    4852 start.go:128] duration metric: took 2.339521042s to createHost
	I0814 09:57:56.026243    4852 start.go:83] releasing machines lock for "kindnet-625000", held for 2.339997209s
	W0814 09:57:56.026689    4852 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:57:56.040322    4852 out.go:177] 
	W0814 09:57:56.044362    4852 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:57:56.044394    4852 out.go:239] * 
	* 
	W0814 09:57:56.047346    4852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:57:56.057326    4852 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.82s)
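
Every Start failure in this group has the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client is refused before QEMU is even launched. A minimal check on the build host is sketched below; the client invocation mirrors the ones in the logs, while the daemon command and launchd label follow the socket_vmnet README and are assumptions about how this agent was provisioned, not something this report shows:

	# Is anything bound to the socket? (`true` is just a harmless payload command.)
	ls -l /var/run/socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

	# If the client prints `Connection refused`, restart the daemon; it must run as
	# root to create the vmnet interface. Via launchd (label per the upstream README):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# ...or by hand, with an assumed gateway address:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet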

TestNetworkPlugins/group/bridge/Start (9.92s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.92015375s)

-- stdout --
	* [bridge-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-625000" primary control-plane node in "bridge-625000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-625000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr ** 
	I0814 09:57:58.389143    4970 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:57:58.389278    4970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:58.389281    4970 out.go:304] Setting ErrFile to fd 2...
	I0814 09:57:58.389284    4970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:57:58.389415    4970 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:57:58.390438    4970 out.go:298] Setting JSON to false
	I0814 09:57:58.406608    4970 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3435,"bootTime":1723651243,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:57:58.406676    4970 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:57:58.412336    4970 out.go:177] * [bridge-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:57:58.420404    4970 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:57:58.420452    4970 notify.go:220] Checking for updates...
	I0814 09:57:58.428430    4970 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:57:58.431396    4970 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:57:58.434342    4970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:57:58.437379    4970 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:57:58.440397    4970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:57:58.443811    4970 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:57:58.443877    4970 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:57:58.443928    4970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:57:58.448346    4970 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:57:58.455256    4970 start.go:297] selected driver: qemu2
	I0814 09:57:58.455263    4970 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:57:58.455270    4970 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:57:58.457593    4970 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:57:58.460402    4970 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:57:58.463490    4970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:57:58.463521    4970 cni.go:84] Creating CNI manager for "bridge"
	I0814 09:57:58.463531    4970 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:57:58.463560    4970 start.go:340] cluster config:
	{Name:bridge-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:57:58.467048    4970 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:57:58.474330    4970 out.go:177] * Starting "bridge-625000" primary control-plane node in "bridge-625000" cluster
	I0814 09:57:58.478230    4970 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:57:58.478247    4970 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:57:58.478256    4970 cache.go:56] Caching tarball of preloaded images
	I0814 09:57:58.478324    4970 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:57:58.478329    4970 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:57:58.478392    4970 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/bridge-625000/config.json ...
	I0814 09:57:58.478409    4970 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/bridge-625000/config.json: {Name:mkd436469c0d91ff16efb567ec8918bd85c84514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:57:58.478621    4970 start.go:360] acquireMachinesLock for bridge-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:57:58.478655    4970 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "bridge-625000"
	I0814 09:57:58.478667    4970 start.go:93] Provisioning new machine with config: &{Name:bridge-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:57:58.478699    4970 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:57:58.487215    4970 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:57:58.504573    4970 start.go:159] libmachine.API.Create for "bridge-625000" (driver="qemu2")
	I0814 09:57:58.504597    4970 client.go:168] LocalClient.Create starting
	I0814 09:57:58.504664    4970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:57:58.504694    4970 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:58.504703    4970 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:58.504739    4970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:57:58.504760    4970 main.go:141] libmachine: Decoding PEM data...
	I0814 09:57:58.504768    4970 main.go:141] libmachine: Parsing certificate...
	I0814 09:57:58.505225    4970 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:57:58.655547    4970 main.go:141] libmachine: Creating SSH key...
	I0814 09:57:58.786065    4970 main.go:141] libmachine: Creating Disk image...
	I0814 09:57:58.786070    4970 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:57:58.786257    4970 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2
	I0814 09:57:58.795869    4970 main.go:141] libmachine: STDOUT: 
	I0814 09:57:58.795898    4970 main.go:141] libmachine: STDERR: 
	I0814 09:57:58.795948    4970 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2 +20000M
	I0814 09:57:58.803804    4970 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:57:58.803820    4970 main.go:141] libmachine: STDERR: 
	I0814 09:57:58.803844    4970 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2
	I0814 09:57:58.803850    4970 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:57:58.803862    4970 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:57:58.803890    4970 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:46:4e:71:ee:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2
	I0814 09:57:58.805473    4970 main.go:141] libmachine: STDOUT: 
	I0814 09:57:58.805488    4970 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:57:58.805511    4970 client.go:171] duration metric: took 300.921333ms to LocalClient.Create
	I0814 09:58:00.807617    4970 start.go:128] duration metric: took 2.328995s to createHost
	I0814 09:58:00.807702    4970 start.go:83] releasing machines lock for "bridge-625000", held for 2.329139542s
	W0814 09:58:00.807823    4970 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:00.820114    4970 out.go:177] * Deleting "bridge-625000" in qemu2 ...
	W0814 09:58:00.852916    4970 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:00.852948    4970 start.go:729] Will try again in 5 seconds ...
	I0814 09:58:05.854949    4970 start.go:360] acquireMachinesLock for bridge-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:58:05.855465    4970 start.go:364] duration metric: took 356.542µs to acquireMachinesLock for "bridge-625000"
	I0814 09:58:05.855579    4970 start.go:93] Provisioning new machine with config: &{Name:bridge-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:58:05.855904    4970 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:58:05.872580    4970 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:58:05.924578    4970 start.go:159] libmachine.API.Create for "bridge-625000" (driver="qemu2")
	I0814 09:58:05.924638    4970 client.go:168] LocalClient.Create starting
	I0814 09:58:05.924747    4970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:58:05.924816    4970 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:05.924833    4970 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:05.924895    4970 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:58:05.924937    4970 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:05.924960    4970 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:05.925526    4970 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:58:06.087941    4970 main.go:141] libmachine: Creating SSH key...
	I0814 09:58:06.212401    4970 main.go:141] libmachine: Creating Disk image...
	I0814 09:58:06.212408    4970 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:58:06.212574    4970 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2
	I0814 09:58:06.221716    4970 main.go:141] libmachine: STDOUT: 
	I0814 09:58:06.221735    4970 main.go:141] libmachine: STDERR: 
	I0814 09:58:06.221776    4970 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2 +20000M
	I0814 09:58:06.229646    4970 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:58:06.229660    4970 main.go:141] libmachine: STDERR: 
	I0814 09:58:06.229670    4970 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2
	I0814 09:58:06.229675    4970 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:58:06.229688    4970 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:58:06.229728    4970 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:c7:26:bb:94:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/bridge-625000/disk.qcow2
	I0814 09:58:06.231266    4970 main.go:141] libmachine: STDOUT: 
	I0814 09:58:06.231282    4970 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:58:06.231295    4970 client.go:171] duration metric: took 306.663625ms to LocalClient.Create
	I0814 09:58:08.233383    4970 start.go:128] duration metric: took 2.377539875s to createHost
	I0814 09:58:08.233462    4970 start.go:83] releasing machines lock for "bridge-625000", held for 2.37807725s
	W0814 09:58:08.233889    4970 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:08.251551    4970 out.go:177] 
	W0814 09:58:08.255627    4970 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:58:08.255646    4970 out.go:239] * 
	* 
	W0814 09:58:08.257513    4970 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:58:08.266563    4970 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.92s)
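
The wall-clock pattern here is determined entirely by minikube's retry loop, not by the CNI under test: the first createHost attempt for bridge-625000 fails after 2.33s, minikube deletes the half-created machine and sleeps the fixed 5s ("Will try again in 5 seconds"), and the second attempt fails after 2.38s, giving 2.33 + 5 + 2.38 ≈ 9.7s; with roughly 0.2s of config and preload setup on top, that is the reported 9.92s. The same arithmetic holds for every plugin in this group, which is why all of their durations cluster just under ten seconds without any plugin code ever running.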

TestNetworkPlugins/group/kubenet/Start (9.8s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.798182375s)

-- stdout --
	* [kubenet-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-625000" primary control-plane node in "kubenet-625000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-625000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr ** 
	I0814 09:58:10.492378    5082 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:58:10.492499    5082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:58:10.492502    5082 out.go:304] Setting ErrFile to fd 2...
	I0814 09:58:10.492505    5082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:58:10.492634    5082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:58:10.493693    5082 out.go:298] Setting JSON to false
	I0814 09:58:10.509812    5082 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3447,"bootTime":1723651243,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:58:10.509913    5082 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:58:10.515660    5082 out.go:177] * [kubenet-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:58:10.523675    5082 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:58:10.523755    5082 notify.go:220] Checking for updates...
	I0814 09:58:10.530620    5082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:58:10.533684    5082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:58:10.536668    5082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:58:10.539658    5082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:58:10.542661    5082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:58:10.545906    5082 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:58:10.545973    5082 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:58:10.546038    5082 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:58:10.550649    5082 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:58:10.556643    5082 start.go:297] selected driver: qemu2
	I0814 09:58:10.556652    5082 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:58:10.556659    5082 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:58:10.559129    5082 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:58:10.561654    5082 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:58:10.564778    5082 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:58:10.564805    5082 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0814 09:58:10.564843    5082 start.go:340] cluster config:
	{Name:kubenet-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:58:10.568577    5082 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:58:10.575660    5082 out.go:177] * Starting "kubenet-625000" primary control-plane node in "kubenet-625000" cluster
	I0814 09:58:10.579659    5082 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:58:10.579677    5082 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:58:10.579689    5082 cache.go:56] Caching tarball of preloaded images
	I0814 09:58:10.579759    5082 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:58:10.579765    5082 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:58:10.579835    5082 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/kubenet-625000/config.json ...
	I0814 09:58:10.579852    5082 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/kubenet-625000/config.json: {Name:mkf8cceaeadb6c90b37d6aa956c6224e9905fa7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:58:10.580180    5082 start.go:360] acquireMachinesLock for kubenet-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:58:10.580214    5082 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "kubenet-625000"
	I0814 09:58:10.580228    5082 start.go:93] Provisioning new machine with config: &{Name:kubenet-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kubenet-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:58:10.580258    5082 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:58:10.588676    5082 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:58:10.606061    5082 start.go:159] libmachine.API.Create for "kubenet-625000" (driver="qemu2")
	I0814 09:58:10.606083    5082 client.go:168] LocalClient.Create starting
	I0814 09:58:10.606143    5082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:58:10.606171    5082 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:10.606180    5082 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:10.606217    5082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:58:10.606240    5082 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:10.606249    5082 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:10.606724    5082 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:58:10.759442    5082 main.go:141] libmachine: Creating SSH key...
	I0814 09:58:10.831418    5082 main.go:141] libmachine: Creating Disk image...
	I0814 09:58:10.831424    5082 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:58:10.831611    5082 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2
	I0814 09:58:10.840721    5082 main.go:141] libmachine: STDOUT: 
	I0814 09:58:10.840739    5082 main.go:141] libmachine: STDERR: 
	I0814 09:58:10.840782    5082 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2 +20000M
	I0814 09:58:10.848745    5082 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:58:10.848764    5082 main.go:141] libmachine: STDERR: 
	I0814 09:58:10.848782    5082 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2
	I0814 09:58:10.848786    5082 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:58:10.848795    5082 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:58:10.848824    5082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:65:a4:94:83:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2
	I0814 09:58:10.850450    5082 main.go:141] libmachine: STDOUT: 
	I0814 09:58:10.850466    5082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:58:10.850491    5082 client.go:171] duration metric: took 244.413875ms to LocalClient.Create
	I0814 09:58:12.852583    5082 start.go:128] duration metric: took 2.272402375s to createHost
	I0814 09:58:12.852642    5082 start.go:83] releasing machines lock for "kubenet-625000", held for 2.272515916s
	W0814 09:58:12.852701    5082 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:12.863876    5082 out.go:177] * Deleting "kubenet-625000" in qemu2 ...
	W0814 09:58:12.901285    5082 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:12.901304    5082 start.go:729] Will try again in 5 seconds ...
	I0814 09:58:17.903238    5082 start.go:360] acquireMachinesLock for kubenet-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:58:17.903693    5082 start.go:364] duration metric: took 339.083µs to acquireMachinesLock for "kubenet-625000"
	I0814 09:58:17.903865    5082 start.go:93] Provisioning new machine with config: &{Name:kubenet-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kubenet-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:58:17.904203    5082 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:58:17.912749    5082 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:58:17.964355    5082 start.go:159] libmachine.API.Create for "kubenet-625000" (driver="qemu2")
	I0814 09:58:17.964404    5082 client.go:168] LocalClient.Create starting
	I0814 09:58:17.964513    5082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:58:17.964576    5082 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:17.964598    5082 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:17.964675    5082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:58:17.964718    5082 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:17.964730    5082 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:17.965221    5082 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:58:18.127822    5082 main.go:141] libmachine: Creating SSH key...
	I0814 09:58:18.197295    5082 main.go:141] libmachine: Creating Disk image...
	I0814 09:58:18.197300    5082 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:58:18.197474    5082 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2
	I0814 09:58:18.206791    5082 main.go:141] libmachine: STDOUT: 
	I0814 09:58:18.206820    5082 main.go:141] libmachine: STDERR: 
	I0814 09:58:18.206876    5082 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2 +20000M
	I0814 09:58:18.214898    5082 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:58:18.214916    5082 main.go:141] libmachine: STDERR: 
	I0814 09:58:18.214936    5082 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2
	I0814 09:58:18.214940    5082 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:58:18.214949    5082 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:58:18.214974    5082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:5a:0d:dc:51:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/kubenet-625000/disk.qcow2
	I0814 09:58:18.216617    5082 main.go:141] libmachine: STDOUT: 
	I0814 09:58:18.216633    5082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:58:18.216645    5082 client.go:171] duration metric: took 252.244083ms to LocalClient.Create
	I0814 09:58:20.218733    5082 start.go:128] duration metric: took 2.314603125s to createHost
	I0814 09:58:20.218790    5082 start.go:83] releasing machines lock for "kubenet-625000", held for 2.315153792s
	W0814 09:58:20.219110    5082 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:20.233764    5082 out.go:177] 
	W0814 09:58:20.237880    5082 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:58:20.237903    5082 out.go:239] * 
	* 
	W0814 09:58:20.240666    5082 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:58:20.250694    5082 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.80s)
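
Every network-plugin failure in this report follows the same pattern: the VM never boots because /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon, so the QEMU netdev file descriptor is never handed over and minikube exits with status 80 (GUEST_PROVISION). The "Connection refused" on the Unix socket /var/run/socket_vmnet means nothing was listening there, i.e. the socket_vmnet service was not running on this agent. A minimal pre-flight probe, sketched below in Go under the assumption that the default socket path from these logs is in use (this is illustrative, not part of the test suite), would surface the problem before any cluster is created:

// socketcheck: a minimal sketch that probes the socket_vmnet control
// socket the way a pre-flight check might. The path is taken from the
// failing logs above; adjust it if socket_vmnet is configured elsewhere.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"

	// A failed dial here reproduces the error every qemu2 start in this
	// report hits: Failed to connect to "/var/run/socket_vmnet".
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

On a healthy host the dial succeeds; on this agent it would fail with the same "connection refused" seen in every ** stderr ** block above, suggesting the daemon needs to be (re)started before any qemu2-driver test can pass.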

TestNetworkPlugins/group/custom-flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.825905417s)

-- stdout --
	* [custom-flannel-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-625000" primary control-plane node in "custom-flannel-625000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-625000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:58:22.476194    5191 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:58:22.476329    5191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:58:22.476332    5191 out.go:304] Setting ErrFile to fd 2...
	I0814 09:58:22.476335    5191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:58:22.476457    5191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:58:22.477516    5191 out.go:298] Setting JSON to false
	I0814 09:58:22.494413    5191 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3459,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:58:22.494489    5191 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:58:22.500962    5191 out.go:177] * [custom-flannel-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:58:22.507940    5191 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:58:22.508001    5191 notify.go:220] Checking for updates...
	I0814 09:58:22.514921    5191 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:58:22.517956    5191 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:58:22.520949    5191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:58:22.523865    5191 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:58:22.526973    5191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:58:22.530332    5191 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:58:22.530406    5191 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:58:22.530456    5191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:58:22.533902    5191 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:58:22.540876    5191 start.go:297] selected driver: qemu2
	I0814 09:58:22.540881    5191 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:58:22.540886    5191 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:58:22.543283    5191 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:58:22.544519    5191 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:58:22.547015    5191 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:58:22.547052    5191 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0814 09:58:22.547063    5191 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0814 09:58:22.547102    5191 start.go:340] cluster config:
	{Name:custom-flannel-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:58:22.550778    5191 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:58:22.557894    5191 out.go:177] * Starting "custom-flannel-625000" primary control-plane node in "custom-flannel-625000" cluster
	I0814 09:58:22.561898    5191 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:58:22.561912    5191 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:58:22.561919    5191 cache.go:56] Caching tarball of preloaded images
	I0814 09:58:22.561973    5191 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:58:22.561979    5191 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:58:22.562031    5191 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/custom-flannel-625000/config.json ...
	I0814 09:58:22.562042    5191 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/custom-flannel-625000/config.json: {Name:mk339ea1b30e9baf0fbe5c6937937ac47f0b8f6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:58:22.562259    5191 start.go:360] acquireMachinesLock for custom-flannel-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:58:22.562295    5191 start.go:364] duration metric: took 27.666µs to acquireMachinesLock for "custom-flannel-625000"
	I0814 09:58:22.562308    5191 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:58:22.562340    5191 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:58:22.570872    5191 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:58:22.587858    5191 start.go:159] libmachine.API.Create for "custom-flannel-625000" (driver="qemu2")
	I0814 09:58:22.587882    5191 client.go:168] LocalClient.Create starting
	I0814 09:58:22.587940    5191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:58:22.587972    5191 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:22.587981    5191 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:22.588013    5191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:58:22.588038    5191 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:22.588049    5191 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:22.588403    5191 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:58:22.739546    5191 main.go:141] libmachine: Creating SSH key...
	I0814 09:58:22.846268    5191 main.go:141] libmachine: Creating Disk image...
	I0814 09:58:22.846273    5191 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:58:22.846465    5191 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2
	I0814 09:58:22.855880    5191 main.go:141] libmachine: STDOUT: 
	I0814 09:58:22.855900    5191 main.go:141] libmachine: STDERR: 
	I0814 09:58:22.855940    5191 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2 +20000M
	I0814 09:58:22.863832    5191 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:58:22.863845    5191 main.go:141] libmachine: STDERR: 
	I0814 09:58:22.863856    5191 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2
	I0814 09:58:22.863861    5191 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:58:22.863876    5191 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:58:22.863904    5191 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:fd:a1:a2:f2:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2
	I0814 09:58:22.865476    5191 main.go:141] libmachine: STDOUT: 
	I0814 09:58:22.865496    5191 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:58:22.865516    5191 client.go:171] duration metric: took 277.640416ms to LocalClient.Create
	I0814 09:58:24.867612    5191 start.go:128] duration metric: took 2.305351542s to createHost
	I0814 09:58:24.867657    5191 start.go:83] releasing machines lock for "custom-flannel-625000", held for 2.305453542s
	W0814 09:58:24.867718    5191 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:24.883799    5191 out.go:177] * Deleting "custom-flannel-625000" in qemu2 ...
	W0814 09:58:24.915131    5191 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:24.915159    5191 start.go:729] Will try again in 5 seconds ...
	I0814 09:58:29.917173    5191 start.go:360] acquireMachinesLock for custom-flannel-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:58:29.917643    5191 start.go:364] duration metric: took 378.209µs to acquireMachinesLock for "custom-flannel-625000"
	I0814 09:58:29.917769    5191 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:58:29.918116    5191 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:58:29.935924    5191 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:58:29.986880    5191 start.go:159] libmachine.API.Create for "custom-flannel-625000" (driver="qemu2")
	I0814 09:58:29.986938    5191 client.go:168] LocalClient.Create starting
	I0814 09:58:29.987038    5191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:58:29.987108    5191 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:29.987125    5191 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:29.987185    5191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:58:29.987228    5191 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:29.987238    5191 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:29.987917    5191 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:58:30.147250    5191 main.go:141] libmachine: Creating SSH key...
	I0814 09:58:30.202688    5191 main.go:141] libmachine: Creating Disk image...
	I0814 09:58:30.202695    5191 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:58:30.202871    5191 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2
	I0814 09:58:30.212008    5191 main.go:141] libmachine: STDOUT: 
	I0814 09:58:30.212024    5191 main.go:141] libmachine: STDERR: 
	I0814 09:58:30.212076    5191 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2 +20000M
	I0814 09:58:30.219952    5191 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:58:30.219976    5191 main.go:141] libmachine: STDERR: 
	I0814 09:58:30.219988    5191 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2
	I0814 09:58:30.219994    5191 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:58:30.220000    5191 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:58:30.220033    5191 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d4:3d:4e:3c:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/custom-flannel-625000/disk.qcow2
	I0814 09:58:30.221656    5191 main.go:141] libmachine: STDOUT: 
	I0814 09:58:30.221672    5191 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:58:30.221690    5191 client.go:171] duration metric: took 234.756584ms to LocalClient.Create
	I0814 09:58:32.223783    5191 start.go:128] duration metric: took 2.30572975s to createHost
	I0814 09:58:32.223846    5191 start.go:83] releasing machines lock for "custom-flannel-625000", held for 2.306279209s
	W0814 09:58:32.224267    5191 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:32.235883    5191 out.go:177] 
	W0814 09:58:32.245904    5191 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:58:32.245972    5191 out.go:239] * 
	* 
	W0814 09:58:32.248505    5191 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:58:32.258822    5191 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.83s)
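
The stderr above also shows minikube's built-in recovery path: after the first "StartHost failed" it deletes the half-created profile, waits five seconds ("Will try again in 5 seconds ..."), and retries exactly once before surfacing GUEST_PROVISION. A hypothetical Go helper with the same shape (bounded attempts, fixed pause; this is a sketch of the observed behavior, not the actual start.go implementation) looks like:

// retrysketch: mirrors the delete-and-retry cycle visible in the logs,
// where a second identical failure ends the run with a terminal error.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry runs create up to attempts times, pausing between tries.
func startWithRetry(create func() error, attempts int, pause time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = create(); err == nil {
			return nil
		}
		if i < attempts-1 {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(pause)
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	// Simulate the permanently refused socket seen throughout this report.
	err := startWithRetry(func() error {
		return errors.New(`connect to "/var/run/socket_vmnet": connection refused`)
	}, 2, 5*time.Second)
	fmt.Println(err)
}

Because the daemon never comes up between attempts, the retry cannot help here; both tries fail identically roughly 7-8 seconds apart, which matches the ~9.8s wall time of each failed test.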

TestNetworkPlugins/group/calico/Start (9.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.803487834s)

-- stdout --
	* [calico-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-625000" primary control-plane node in "calico-625000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-625000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:58:34.664395    5308 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:58:34.664539    5308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:58:34.664543    5308 out.go:304] Setting ErrFile to fd 2...
	I0814 09:58:34.664545    5308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:58:34.664682    5308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:58:34.665723    5308 out.go:298] Setting JSON to false
	I0814 09:58:34.681900    5308 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3471,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:58:34.681964    5308 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:58:34.688164    5308 out.go:177] * [calico-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:58:34.696179    5308 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:58:34.696211    5308 notify.go:220] Checking for updates...
	I0814 09:58:34.703041    5308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:58:34.706100    5308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:58:34.709078    5308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:58:34.712054    5308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:58:34.715070    5308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:58:34.718421    5308 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:58:34.718491    5308 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:58:34.718549    5308 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:58:34.723054    5308 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:58:34.730059    5308 start.go:297] selected driver: qemu2
	I0814 09:58:34.730068    5308 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:58:34.730075    5308 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:58:34.732555    5308 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:58:34.736054    5308 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:58:34.739213    5308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:58:34.739271    5308 cni.go:84] Creating CNI manager for "calico"
	I0814 09:58:34.739279    5308 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0814 09:58:34.739311    5308 start.go:340] cluster config:
	{Name:calico-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:58:34.743040    5308 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:58:34.751086    5308 out.go:177] * Starting "calico-625000" primary control-plane node in "calico-625000" cluster
	I0814 09:58:34.755074    5308 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:58:34.755091    5308 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:58:34.755104    5308 cache.go:56] Caching tarball of preloaded images
	I0814 09:58:34.755170    5308 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:58:34.755176    5308 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:58:34.755265    5308 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/calico-625000/config.json ...
	I0814 09:58:34.755283    5308 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/calico-625000/config.json: {Name:mk0c535938de146a2d6d728b371b18d376785e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:58:34.755623    5308 start.go:360] acquireMachinesLock for calico-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:58:34.755665    5308 start.go:364] duration metric: took 34.542µs to acquireMachinesLock for "calico-625000"
	I0814 09:58:34.755678    5308 start.go:93] Provisioning new machine with config: &{Name:calico-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:58:34.755706    5308 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:58:34.764085    5308 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:58:34.782458    5308 start.go:159] libmachine.API.Create for "calico-625000" (driver="qemu2")
	I0814 09:58:34.782487    5308 client.go:168] LocalClient.Create starting
	I0814 09:58:34.782552    5308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:58:34.782579    5308 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:34.782589    5308 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:34.782624    5308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:58:34.782646    5308 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:34.782656    5308 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:34.783078    5308 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:58:34.934975    5308 main.go:141] libmachine: Creating SSH key...
	I0814 09:58:35.013043    5308 main.go:141] libmachine: Creating Disk image...
	I0814 09:58:35.013049    5308 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:58:35.013223    5308 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2
	I0814 09:58:35.022568    5308 main.go:141] libmachine: STDOUT: 
	I0814 09:58:35.022585    5308 main.go:141] libmachine: STDERR: 
	I0814 09:58:35.022635    5308 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2 +20000M
	I0814 09:58:35.030504    5308 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:58:35.030525    5308 main.go:141] libmachine: STDERR: 
	I0814 09:58:35.030551    5308 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2
	I0814 09:58:35.030557    5308 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:58:35.030568    5308 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:58:35.030600    5308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:50:df:4c:04:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2
	I0814 09:58:35.032246    5308 main.go:141] libmachine: STDOUT: 
	I0814 09:58:35.032264    5308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:58:35.032283    5308 client.go:171] duration metric: took 249.799875ms to LocalClient.Create
	I0814 09:58:37.034367    5308 start.go:128] duration metric: took 2.278739209s to createHost
	I0814 09:58:37.034426    5308 start.go:83] releasing machines lock for "calico-625000", held for 2.278850292s
	W0814 09:58:37.034532    5308 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:37.040979    5308 out.go:177] * Deleting "calico-625000" in qemu2 ...
	W0814 09:58:37.071764    5308 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:37.071786    5308 start.go:729] Will try again in 5 seconds ...
	I0814 09:58:42.073922    5308 start.go:360] acquireMachinesLock for calico-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:58:42.074354    5308 start.go:364] duration metric: took 340.667µs to acquireMachinesLock for "calico-625000"
	I0814 09:58:42.074476    5308 start.go:93] Provisioning new machine with config: &{Name:calico-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:58:42.074756    5308 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:58:42.086284    5308 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:58:42.136792    5308 start.go:159] libmachine.API.Create for "calico-625000" (driver="qemu2")
	I0814 09:58:42.136836    5308 client.go:168] LocalClient.Create starting
	I0814 09:58:42.136942    5308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:58:42.137009    5308 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:42.137030    5308 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:42.137086    5308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:58:42.137145    5308 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:42.137160    5308 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:42.137769    5308 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:58:42.297572    5308 main.go:141] libmachine: Creating SSH key...
	I0814 09:58:42.372438    5308 main.go:141] libmachine: Creating Disk image...
	I0814 09:58:42.372444    5308 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:58:42.372623    5308 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2
	I0814 09:58:42.381957    5308 main.go:141] libmachine: STDOUT: 
	I0814 09:58:42.381976    5308 main.go:141] libmachine: STDERR: 
	I0814 09:58:42.382029    5308 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2 +20000M
	I0814 09:58:42.389927    5308 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:58:42.389943    5308 main.go:141] libmachine: STDERR: 
	I0814 09:58:42.389955    5308 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2
	I0814 09:58:42.389959    5308 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:58:42.389968    5308 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:58:42.389988    5308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:d3:dd:60:28:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/calico-625000/disk.qcow2
	I0814 09:58:42.391641    5308 main.go:141] libmachine: STDOUT: 
	I0814 09:58:42.391658    5308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:58:42.391675    5308 client.go:171] duration metric: took 254.846167ms to LocalClient.Create
	I0814 09:58:44.393763    5308 start.go:128] duration metric: took 2.319060584s to createHost
	I0814 09:58:44.393810    5308 start.go:83] releasing machines lock for "calico-625000", held for 2.319528041s
	W0814 09:58:44.394129    5308 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:44.408780    5308 out.go:177] 
	W0814 09:58:44.411774    5308 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:58:44.411806    5308 out.go:239] * 
	* 
	W0814 09:58:44.414416    5308 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:58:44.424756    5308 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.81s)
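
Note that disk provisioning itself succeeds on every attempt: qemu-img convert and qemu-img resize both return cleanly (empty STDERR, "Image resized.") before the socket connection fails, so the failure is isolated to the networking step. The two qemu-img invocations can be reproduced in isolation; the Go sketch below mirrors the logged commands via os/exec, with placeholder file names instead of the CI workspace paths:

// disksketch: reproduces the two qemu-img calls that libmachine logs
// above (convert raw -> qcow2, then grow by 20000M). Requires qemu-img
// on PATH; file names here are placeholders, not the CI paths.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and forwards its output, similar to the
// "executing:" lines in the libmachine log.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	raw, img := "disk.qcow2.raw", "disk.qcow2" // placeholder paths

	// Same invocation as the logs: convert the raw seed image to qcow2...
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img); err != nil {
		fmt.Fprintln(os.Stderr, "convert failed:", err)
		os.Exit(1)
	}
	// ...then grow it by 20000M, matching `qemu-img resize <image> +20000M`.
	if err := run("qemu-img", "resize", img, "+20000M"); err != nil {
		fmt.Fprintln(os.Stderr, "resize failed:", err)
		os.Exit(1)
	}
	fmt.Println("disk image created and resized")
}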

x
+
TestNetworkPlugins/group/false/Start (9.82s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-625000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.815711708s)

-- stdout --
	* [false-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-625000" primary control-plane node in "false-625000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-625000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:58:46.841071    5427 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:58:46.841206    5427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:58:46.841209    5427 out.go:304] Setting ErrFile to fd 2...
	I0814 09:58:46.841211    5427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:58:46.841346    5427 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:58:46.842390    5427 out.go:298] Setting JSON to false
	I0814 09:58:46.858398    5427 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3483,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:58:46.858472    5427 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:58:46.865170    5427 out.go:177] * [false-625000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:58:46.873110    5427 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:58:46.873160    5427 notify.go:220] Checking for updates...
	I0814 09:58:46.880038    5427 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:58:46.883089    5427 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:58:46.886121    5427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:58:46.889110    5427 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:58:46.892113    5427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:58:46.895510    5427 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:58:46.895584    5427 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:58:46.895640    5427 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:58:46.900067    5427 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:58:46.907078    5427 start.go:297] selected driver: qemu2
	I0814 09:58:46.907086    5427 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:58:46.907093    5427 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:58:46.909485    5427 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:58:46.912090    5427 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:58:46.915191    5427 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:58:46.915222    5427 cni.go:84] Creating CNI manager for "false"
	I0814 09:58:46.915261    5427 start.go:340] cluster config:
	{Name:false-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:58:46.918976    5427 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:58:46.926070    5427 out.go:177] * Starting "false-625000" primary control-plane node in "false-625000" cluster
	I0814 09:58:46.930014    5427 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:58:46.930028    5427 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:58:46.930037    5427 cache.go:56] Caching tarball of preloaded images
	I0814 09:58:46.930090    5427 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:58:46.930095    5427 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:58:46.930156    5427 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/false-625000/config.json ...
	I0814 09:58:46.930167    5427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/false-625000/config.json: {Name:mked783bc19b20bd817296edc0c461140b8f226e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:58:46.930384    5427 start.go:360] acquireMachinesLock for false-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:58:46.930420    5427 start.go:364] duration metric: took 28.583µs to acquireMachinesLock for "false-625000"
	I0814 09:58:46.930433    5427 start.go:93] Provisioning new machine with config: &{Name:false-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:58:46.930459    5427 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:58:46.939102    5427 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:58:46.957028    5427 start.go:159] libmachine.API.Create for "false-625000" (driver="qemu2")
	I0814 09:58:46.957051    5427 client.go:168] LocalClient.Create starting
	I0814 09:58:46.957135    5427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:58:46.957174    5427 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:46.957184    5427 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:46.957228    5427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:58:46.957251    5427 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:46.957260    5427 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:46.957665    5427 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:58:47.108378    5427 main.go:141] libmachine: Creating SSH key...
	I0814 09:58:47.169641    5427 main.go:141] libmachine: Creating Disk image...
	I0814 09:58:47.169646    5427 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:58:47.169841    5427 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2
	I0814 09:58:47.179218    5427 main.go:141] libmachine: STDOUT: 
	I0814 09:58:47.179236    5427 main.go:141] libmachine: STDERR: 
	I0814 09:58:47.179284    5427 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2 +20000M
	I0814 09:58:47.187200    5427 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:58:47.187225    5427 main.go:141] libmachine: STDERR: 
	I0814 09:58:47.187240    5427 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2
	I0814 09:58:47.187244    5427 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:58:47.187257    5427 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:58:47.187281    5427 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f3:b7:cd:56:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2
	I0814 09:58:47.188951    5427 main.go:141] libmachine: STDOUT: 
	I0814 09:58:47.188965    5427 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:58:47.188991    5427 client.go:171] duration metric: took 231.94275ms to LocalClient.Create
	I0814 09:58:49.191074    5427 start.go:128] duration metric: took 2.260688417s to createHost
	I0814 09:58:49.191128    5427 start.go:83] releasing machines lock for "false-625000", held for 2.260796625s
	W0814 09:58:49.191193    5427 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:49.203387    5427 out.go:177] * Deleting "false-625000" in qemu2 ...
	W0814 09:58:49.239055    5427 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:49.239086    5427 start.go:729] Will try again in 5 seconds ...
	I0814 09:58:54.241039    5427 start.go:360] acquireMachinesLock for false-625000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:58:54.241505    5427 start.go:364] duration metric: took 362.042µs to acquireMachinesLock for "false-625000"
	I0814 09:58:54.241617    5427 start.go:93] Provisioning new machine with config: &{Name:false-625000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:58:54.241952    5427 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:58:54.247676    5427 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 09:58:54.295023    5427 start.go:159] libmachine.API.Create for "false-625000" (driver="qemu2")
	I0814 09:58:54.295071    5427 client.go:168] LocalClient.Create starting
	I0814 09:58:54.295177    5427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:58:54.295230    5427 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:54.295246    5427 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:54.295303    5427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:58:54.295346    5427 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:54.295356    5427 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:54.295876    5427 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:58:54.456254    5427 main.go:141] libmachine: Creating SSH key...
	I0814 09:58:54.560491    5427 main.go:141] libmachine: Creating Disk image...
	I0814 09:58:54.560496    5427 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:58:54.560685    5427 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2
	I0814 09:58:54.570169    5427 main.go:141] libmachine: STDOUT: 
	I0814 09:58:54.570188    5427 main.go:141] libmachine: STDERR: 
	I0814 09:58:54.570232    5427 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2 +20000M
	I0814 09:58:54.578185    5427 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:58:54.578202    5427 main.go:141] libmachine: STDERR: 
	I0814 09:58:54.578213    5427 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2
	I0814 09:58:54.578218    5427 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:58:54.578226    5427 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:58:54.578255    5427 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:92:98:bd:ea:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/false-625000/disk.qcow2
	I0814 09:58:54.579917    5427 main.go:141] libmachine: STDOUT: 
	I0814 09:58:54.579935    5427 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:58:54.579947    5427 client.go:171] duration metric: took 284.88375ms to LocalClient.Create
	I0814 09:58:56.582038    5427 start.go:128] duration metric: took 2.340151625s to createHost
	I0814 09:58:56.582098    5427 start.go:83] releasing machines lock for "false-625000", held for 2.340667709s
	W0814 09:58:56.582426    5427 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:58:56.592003    5427 out.go:177] 
	W0814 09:58:56.602149    5427 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:58:56.602200    5427 out.go:239] * 
	* 
	W0814 09:58:56.604645    5427 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:58:56.612631    5427 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.82s)
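
The trace above shows minikube's recovery path: after the first StartHost failure it deletes the half-created profile, waits five seconds, and makes exactly one more attempt before exiting with status 80. A simplified Go sketch of that shape (an illustration of the pattern, not minikube's actual implementation):

// Simplified sketch of the retry behaviour visible in the log above:
// one failed StartHost attempt triggers a cleanup, a fixed 5-second
// wait, and exactly one more attempt before giving up.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	// Stand-in for the real VM creation; in the failing runs this
	// always returns the socket_vmnet "connection refused" error.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}

Because the socket_vmnet daemon never comes back on its own, the second attempt fails identically, which is why each of these tests burns roughly ten seconds: two ~2.3s create attempts plus the 5-second back-off.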

TestStartStop/group/old-k8s-version/serial/FirstStart (10.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
E0814 09:58:59.855578    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.027134s)

-- stdout --
	* [old-k8s-version-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-629000" primary control-plane node in "old-k8s-version-629000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-629000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:58:58.814404    5539 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:58:58.814531    5539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:58:58.814535    5539 out.go:304] Setting ErrFile to fd 2...
	I0814 09:58:58.814537    5539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:58:58.814684    5539 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:58:58.815771    5539 out.go:298] Setting JSON to false
	I0814 09:58:58.831839    5539 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3495,"bootTime":1723651243,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:58:58.831911    5539 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:58:58.838796    5539 out.go:177] * [old-k8s-version-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:58:58.846694    5539 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:58:58.846746    5539 notify.go:220] Checking for updates...
	I0814 09:58:58.853644    5539 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:58:58.856639    5539 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:58:58.859702    5539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:58:58.860974    5539 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:58:58.863665    5539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:58:58.867052    5539 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:58:58.867121    5539 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:58:58.867184    5539 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:58:58.871532    5539 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:58:58.878713    5539 start.go:297] selected driver: qemu2
	I0814 09:58:58.878723    5539 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:58:58.878731    5539 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:58:58.881038    5539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:58:58.884716    5539 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:58:58.887817    5539 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:58:58.887845    5539 cni.go:84] Creating CNI manager for ""
	I0814 09:58:58.887852    5539 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0814 09:58:58.887895    5539 start.go:340] cluster config:
	{Name:old-k8s-version-629000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:58:58.891625    5539 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:58:58.897641    5539 out.go:177] * Starting "old-k8s-version-629000" primary control-plane node in "old-k8s-version-629000" cluster
	I0814 09:58:58.901657    5539 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0814 09:58:58.901680    5539 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0814 09:58:58.901693    5539 cache.go:56] Caching tarball of preloaded images
	I0814 09:58:58.901749    5539 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:58:58.901755    5539 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0814 09:58:58.901843    5539 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/old-k8s-version-629000/config.json ...
	I0814 09:58:58.901859    5539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/old-k8s-version-629000/config.json: {Name:mk34cde04fc89ebd508eb6ece530ef38b20dae7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:58:58.902185    5539 start.go:360] acquireMachinesLock for old-k8s-version-629000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:58:58.902224    5539 start.go:364] duration metric: took 29.125µs to acquireMachinesLock for "old-k8s-version-629000"
	I0814 09:58:58.902237    5539 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:58:58.902270    5539 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:58:58.906626    5539 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:58:58.923830    5539 start.go:159] libmachine.API.Create for "old-k8s-version-629000" (driver="qemu2")
	I0814 09:58:58.923854    5539 client.go:168] LocalClient.Create starting
	I0814 09:58:58.923923    5539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:58:58.923950    5539 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:58.923959    5539 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:58.923992    5539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:58:58.924014    5539 main.go:141] libmachine: Decoding PEM data...
	I0814 09:58:58.924020    5539 main.go:141] libmachine: Parsing certificate...
	I0814 09:58:58.924429    5539 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:58:59.075034    5539 main.go:141] libmachine: Creating SSH key...
	I0814 09:58:59.230164    5539 main.go:141] libmachine: Creating Disk image...
	I0814 09:58:59.230170    5539 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:58:59.230393    5539 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0814 09:58:59.240424    5539 main.go:141] libmachine: STDOUT: 
	I0814 09:58:59.240444    5539 main.go:141] libmachine: STDERR: 
	I0814 09:58:59.240490    5539 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2 +20000M
	I0814 09:58:59.248437    5539 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:58:59.248463    5539 main.go:141] libmachine: STDERR: 
	I0814 09:58:59.248481    5539 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0814 09:58:59.248486    5539 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:58:59.248500    5539 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:58:59.248527    5539 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:e1:9a:b4:74:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0814 09:58:59.250180    5539 main.go:141] libmachine: STDOUT: 
	I0814 09:58:59.250194    5539 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:58:59.250214    5539 client.go:171] duration metric: took 326.370625ms to LocalClient.Create
	I0814 09:59:01.252363    5539 start.go:128] duration metric: took 2.350163166s to createHost
	I0814 09:59:01.252449    5539 start.go:83] releasing machines lock for "old-k8s-version-629000", held for 2.350317375s
	W0814 09:59:01.252527    5539 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:01.262819    5539 out.go:177] * Deleting "old-k8s-version-629000" in qemu2 ...
	W0814 09:59:01.293925    5539 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:01.293956    5539 start.go:729] Will try again in 5 seconds ...
	I0814 09:59:06.295934    5539 start.go:360] acquireMachinesLock for old-k8s-version-629000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:06.296422    5539 start.go:364] duration metric: took 388.542µs to acquireMachinesLock for "old-k8s-version-629000"
	I0814 09:59:06.296563    5539 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:59:06.296779    5539 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:59:06.312577    5539 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:59:06.363688    5539 start.go:159] libmachine.API.Create for "old-k8s-version-629000" (driver="qemu2")
	I0814 09:59:06.363740    5539 client.go:168] LocalClient.Create starting
	I0814 09:59:06.363863    5539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:59:06.363939    5539 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:06.363956    5539 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:06.364021    5539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:59:06.364065    5539 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:06.364080    5539 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:06.364899    5539 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:59:06.523714    5539 main.go:141] libmachine: Creating SSH key...
	I0814 09:59:06.744674    5539 main.go:141] libmachine: Creating Disk image...
	I0814 09:59:06.744681    5539 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:59:06.744931    5539 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0814 09:59:06.754965    5539 main.go:141] libmachine: STDOUT: 
	I0814 09:59:06.754986    5539 main.go:141] libmachine: STDERR: 
	I0814 09:59:06.755032    5539 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2 +20000M
	I0814 09:59:06.762981    5539 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:59:06.762996    5539 main.go:141] libmachine: STDERR: 
	I0814 09:59:06.763006    5539 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0814 09:59:06.763009    5539 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:59:06.763020    5539 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:06.763044    5539 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:66:a4:9e:f7:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0814 09:59:06.764636    5539 main.go:141] libmachine: STDOUT: 
	I0814 09:59:06.764651    5539 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:06.764662    5539 client.go:171] duration metric: took 400.934125ms to LocalClient.Create
	I0814 09:59:08.766745    5539 start.go:128] duration metric: took 2.470040458s to createHost
	I0814 09:59:08.766796    5539 start.go:83] releasing machines lock for "old-k8s-version-629000", held for 2.470443417s
	W0814 09:59:08.767159    5539 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-629000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-629000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:08.780835    5539 out.go:177] 
	W0814 09:59:08.785835    5539 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:59:08.785864    5539 out.go:239] * 
	* 
	W0814 09:59:08.788732    5539 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:59:08.798824    5539 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (67.222125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.10s)
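
The post-mortem helper above probes host state with "minikube status --format={{.Host}}" and treats exit status 7 as acceptable ("may be ok"), since it only means the host is stopped rather than broken. Roughly, in Go (binary path and profile name taken from the log; a sketch, not the helper's actual code):

// Sketch of the post-mortem status probe: run the status command for
// the profile and treat a non-zero exit as informational, because exit
// status 7 simply reports a stopped host after the failed start.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "old-k8s-version-629000")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out)) // e.g. "Stopped"
	if exitErr, ok := err.(*exec.ExitError); ok {
		// exit status 7 (host stopped) is expected here, so the helper
		// skips log retrieval instead of failing the post-mortem.
		fmt.Printf("status exited %d (may be ok), host state: %q\n",
			exitErr.ExitCode(), state)
		return
	}
	fmt.Printf("host state: %q\n", state)
}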

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-629000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-629000 create -f testdata/busybox.yaml: exit status 1 (28.962958ms)

** stderr ** 
	error: context "old-k8s-version-629000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-629000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (30.391417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (30.170334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
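
The create fails because FirstStart never registered the "old-k8s-version-629000" context in the kubeconfig, so every subsequent "kubectl --context" invocation is doomed from the start. An illustrative guard (not part of the test suite) that checks for the context before applying manifests; "kubectl config get-contexts -o name" lists only the context names:

// Illustrative guard: confirm a kubeconfig context exists before
// running `kubectl --context <name> create ...`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	// Output is one context name per line.
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("old-k8s-version-629000")
	fmt.Println(ok, err) // false, <nil> in the state captured above
}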

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-629000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-629000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-629000 describe deploy/metrics-server -n kube-system: exit status 1 (26.265417ms)

** stderr ** 
	error: context "old-k8s-version-629000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-629000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (30.346375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.192241541s)
-- stdout --
	* [old-k8s-version-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-629000" primary control-plane node in "old-k8s-version-629000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0814 09:59:12.829882    5590 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:59:12.830005    5590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:12.830008    5590 out.go:304] Setting ErrFile to fd 2...
	I0814 09:59:12.830010    5590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:12.830137    5590 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:59:12.831182    5590 out.go:298] Setting JSON to false
	I0814 09:59:12.847116    5590 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3509,"bootTime":1723651243,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:59:12.847180    5590 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:59:12.852303    5590 out.go:177] * [old-k8s-version-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:59:12.859260    5590 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:59:12.859329    5590 notify.go:220] Checking for updates...
	I0814 09:59:12.867131    5590 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:59:12.870245    5590 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:59:12.873281    5590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:59:12.876335    5590 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:59:12.879241    5590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:59:12.882597    5590 config.go:182] Loaded profile config "old-k8s-version-629000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0814 09:59:12.886277    5590 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 09:59:12.889161    5590 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:59:12.893236    5590 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:59:12.900198    5590 start.go:297] selected driver: qemu2
	I0814 09:59:12.900205    5590 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:59:12.900267    5590 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:59:12.902628    5590 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:59:12.902659    5590 cni.go:84] Creating CNI manager for ""
	I0814 09:59:12.902666    5590 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0814 09:59:12.902695    5590 start.go:340] cluster config:
	{Name:old-k8s-version-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-629000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:59:12.906215    5590 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:12.913110    5590 out.go:177] * Starting "old-k8s-version-629000" primary control-plane node in "old-k8s-version-629000" cluster
	I0814 09:59:12.917272    5590 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0814 09:59:12.917289    5590 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0814 09:59:12.917303    5590 cache.go:56] Caching tarball of preloaded images
	I0814 09:59:12.917366    5590 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:59:12.917373    5590 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0814 09:59:12.917445    5590 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/old-k8s-version-629000/config.json ...
	I0814 09:59:12.917880    5590 start.go:360] acquireMachinesLock for old-k8s-version-629000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:12.917909    5590 start.go:364] duration metric: took 22.917µs to acquireMachinesLock for "old-k8s-version-629000"
	I0814 09:59:12.917918    5590 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:59:12.917922    5590 fix.go:54] fixHost starting: 
	I0814 09:59:12.918038    5590 fix.go:112] recreateIfNeeded on old-k8s-version-629000: state=Stopped err=<nil>
	W0814 09:59:12.918045    5590 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:59:12.922228    5590 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-629000" ...
	I0814 09:59:12.930262    5590 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:12.930307    5590 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:66:a4:9e:f7:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0814 09:59:12.932231    5590 main.go:141] libmachine: STDOUT: 
	I0814 09:59:12.932247    5590 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:12.932276    5590 fix.go:56] duration metric: took 14.351708ms for fixHost
	I0814 09:59:12.932280    5590 start.go:83] releasing machines lock for "old-k8s-version-629000", held for 14.3675ms
	W0814 09:59:12.932286    5590 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:59:12.932340    5590 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:12.932345    5590 start.go:729] Will try again in 5 seconds ...
	I0814 09:59:17.934248    5590 start.go:360] acquireMachinesLock for old-k8s-version-629000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:17.934725    5590 start.go:364] duration metric: took 392.75µs to acquireMachinesLock for "old-k8s-version-629000"
	I0814 09:59:17.934869    5590 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:59:17.934888    5590 fix.go:54] fixHost starting: 
	I0814 09:59:17.935637    5590 fix.go:112] recreateIfNeeded on old-k8s-version-629000: state=Stopped err=<nil>
	W0814 09:59:17.935662    5590 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:59:17.944004    5590 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-629000" ...
	I0814 09:59:17.946967    5590 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:17.947241    5590 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:66:a4:9e:f7:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0814 09:59:17.956301    5590 main.go:141] libmachine: STDOUT: 
	I0814 09:59:17.956374    5590 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:17.956474    5590 fix.go:56] duration metric: took 21.583042ms for fixHost
	I0814 09:59:17.956494    5590 start.go:83] releasing machines lock for "old-k8s-version-629000", held for 21.716625ms
	W0814 09:59:17.956722    5590 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-629000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-629000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:17.965003    5590 out.go:177] 
	W0814 09:59:17.968990    5590 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:59:17.969049    5590 out.go:239] * 
	* 
	W0814 09:59:17.971987    5590 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:59:17.979974    5590 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (67.801375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
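Note: both restart attempts above die at the same step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's socket at /var/run/socket_vmnet ("Connection refused"). That points at socket_vmnet not running on the CI host rather than anything profile-specific. A rough triage sketch, assuming a standard socket_vmnet install at the paths shown in the log (the gateway address below is the usual default, not taken from this run):

    # Is the daemon alive, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # If not, relaunch it (socket_vmnet needs root to create the vmnet interface):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

    # Then retry the failed start:
    out/minikube-darwin-arm64 start -p old-k8s-version-629000 --driver=qemu2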
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-629000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (33.015916ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-629000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-629000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-629000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.70675ms)
** stderr ** 
	error: context "old-k8s-version-629000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-629000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (30.615875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-629000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (30.120917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
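Note: the "+got" side of the diff above is empty because image list ran against a stopped host; the "-want" entries are the expected v1.20.0 images, which still live under the pre-rename k8s.gcr.io registry for that Kubernetes release. Once a profile actually starts, the check can be reproduced by hand, assuming the same binary and profile name:

    # List what the profile holds and compare against the -want list above
    out/minikube-darwin-arm64 -p old-k8s-version-629000 image list --format=table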
TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-629000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-629000 --alsologtostderr -v=1: exit status 83 (40.433666ms)
-- stdout --
	* The control-plane node old-k8s-version-629000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-629000"
-- /stdout --
** stderr ** 
	I0814 09:59:18.254884    5612 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:59:18.255288    5612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:18.255292    5612 out.go:304] Setting ErrFile to fd 2...
	I0814 09:59:18.255295    5612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:18.255453    5612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:59:18.255657    5612 out.go:298] Setting JSON to false
	I0814 09:59:18.255666    5612 mustload.go:65] Loading cluster: old-k8s-version-629000
	I0814 09:59:18.255867    5612 config.go:182] Loaded profile config "old-k8s-version-629000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0814 09:59:18.260134    5612 out.go:177] * The control-plane node old-k8s-version-629000 host is not running: state=Stopped
	I0814 09:59:18.263054    5612 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-629000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-629000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (30.800625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (29.966709ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
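Note: exit status 83 here appears to be minikube's advice path rather than a crash: pause declines to act because the host is Stopped, and the stdout block above already prints the recovery command. In sequence, assuming the same profile:

    # Bring the host back first, then pause it
    out/minikube-darwin-arm64 start -p old-k8s-version-629000
    out/minikube-darwin-arm64 pause -p old-k8s-version-629000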
TestStartStop/group/no-preload/serial/FirstStart (9.95s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.881344458s)
-- stdout --
	* [no-preload-843000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-843000" primary control-plane node in "no-preload-843000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-843000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0814 09:59:18.576921    5629 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:59:18.577052    5629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:18.577055    5629 out.go:304] Setting ErrFile to fd 2...
	I0814 09:59:18.577057    5629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:18.577172    5629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:59:18.578272    5629 out.go:298] Setting JSON to false
	I0814 09:59:18.594169    5629 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3515,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:59:18.594238    5629 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:59:18.599125    5629 out.go:177] * [no-preload-843000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:59:18.605068    5629 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:59:18.605120    5629 notify.go:220] Checking for updates...
	I0814 09:59:18.611036    5629 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:59:18.614060    5629 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:59:18.617064    5629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:59:18.619992    5629 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:59:18.623008    5629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:59:18.626307    5629 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:18.626365    5629 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:18.626415    5629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:59:18.630011    5629 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:59:18.637059    5629 start.go:297] selected driver: qemu2
	I0814 09:59:18.637068    5629 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:59:18.637076    5629 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:59:18.639383    5629 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:59:18.640558    5629 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:59:18.643117    5629 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:59:18.643137    5629 cni.go:84] Creating CNI manager for ""
	I0814 09:59:18.643144    5629 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:59:18.643148    5629 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:59:18.643185    5629 start.go:340] cluster config:
	{Name:no-preload-843000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:59:18.646771    5629 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:18.654034    5629 out.go:177] * Starting "no-preload-843000" primary control-plane node in "no-preload-843000" cluster
	I0814 09:59:18.658039    5629 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:59:18.658142    5629 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/no-preload-843000/config.json ...
	I0814 09:59:18.658166    5629 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/no-preload-843000/config.json: {Name:mk9c85959c3b5d511482ffd31e3558269ed50cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:59:18.658173    5629 cache.go:107] acquiring lock: {Name:mkeb300ad6a0c77d0f7b70e9bc394a0bde46d181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:18.658188    5629 cache.go:107] acquiring lock: {Name:mk5fd861231df5b1cda3ff3fa54d336af27b1727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:18.658185    5629 cache.go:107] acquiring lock: {Name:mkb86e70acb6a2ad19ceb6ec70bcc143a32db50c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:18.658253    5629 cache.go:115] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0814 09:59:18.658277    5629 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.042µs
	I0814 09:59:18.658289    5629 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0814 09:59:18.658296    5629 cache.go:107] acquiring lock: {Name:mkea5323f7ef9d880ecfe3ee697c359223d50604 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:18.658365    5629 cache.go:107] acquiring lock: {Name:mka06e641ac2c98a9a4dbbce88bc2d091b295a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:18.658364    5629 cache.go:107] acquiring lock: {Name:mkdd836c5aada6b5f2c94ce532b2a564078db86b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:18.658385    5629 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 09:59:18.658389    5629 cache.go:107] acquiring lock: {Name:mkfd83cd9741cce3f1c9a99b465ed0babefe1730 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:18.658432    5629 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 09:59:18.658409    5629 cache.go:107] acquiring lock: {Name:mk8567429bb8f01e257d0359d9e5b5282ac17105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:18.658455    5629 start.go:360] acquireMachinesLock for no-preload-843000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:18.658552    5629 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 09:59:18.658580    5629 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 09:59:18.658598    5629 start.go:364] duration metric: took 133.667µs to acquireMachinesLock for "no-preload-843000"
	I0814 09:59:18.658604    5629 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 09:59:18.658696    5629 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 09:59:18.658698    5629 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 09:59:18.658661    5629 start.go:93] Provisioning new machine with config: &{Name:no-preload-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:no-preload-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:59:18.658748    5629 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:59:18.667023    5629 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:59:18.672222    5629 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 09:59:18.672281    5629 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 09:59:18.672354    5629 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 09:59:18.672829    5629 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 09:59:18.675227    5629 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 09:59:18.675570    5629 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 09:59:18.675622    5629 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 09:59:18.684923    5629 start.go:159] libmachine.API.Create for "no-preload-843000" (driver="qemu2")
	I0814 09:59:18.684939    5629 client.go:168] LocalClient.Create starting
	I0814 09:59:18.685009    5629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:59:18.685039    5629 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:18.685052    5629 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:18.685091    5629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:59:18.685114    5629 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:18.685119    5629 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:18.685401    5629 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:59:18.838159    5629 main.go:141] libmachine: Creating SSH key...
	I0814 09:59:18.940798    5629 main.go:141] libmachine: Creating Disk image...
	I0814 09:59:18.940832    5629 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:59:18.941045    5629 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2
	I0814 09:59:18.950391    5629 main.go:141] libmachine: STDOUT: 
	I0814 09:59:18.950413    5629 main.go:141] libmachine: STDERR: 
	I0814 09:59:18.950480    5629 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2 +20000M
	I0814 09:59:18.959942    5629 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:59:18.959972    5629 main.go:141] libmachine: STDERR: 
	I0814 09:59:18.959991    5629 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2
	I0814 09:59:18.959994    5629 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:59:18.960016    5629 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:18.960054    5629 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:fa:35:38:7e:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2
	I0814 09:59:18.961875    5629 main.go:141] libmachine: STDOUT: 
	I0814 09:59:18.961895    5629 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:18.961928    5629 client.go:171] duration metric: took 276.995291ms to LocalClient.Create
	I0814 09:59:19.056859    5629 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0814 09:59:19.070014    5629 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 09:59:19.100282    5629 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 09:59:19.105775    5629 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 09:59:19.124570    5629 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0814 09:59:19.182409    5629 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 09:59:19.217323    5629 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0814 09:59:19.217364    5629 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 559.087208ms
	I0814 09:59:19.217385    5629 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0814 09:59:19.225678    5629 cache.go:162] opening:  /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 09:59:20.962038    5629 start.go:128] duration metric: took 2.303363333s to createHost
	I0814 09:59:20.962115    5629 start.go:83] releasing machines lock for "no-preload-843000", held for 2.303598334s
	W0814 09:59:20.962167    5629 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:20.980212    5629 out.go:177] * Deleting "no-preload-843000" in qemu2 ...
	W0814 09:59:21.016690    5629 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:21.016726    5629 start.go:729] Will try again in 5 seconds ...
	I0814 09:59:22.120345    5629 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0814 09:59:22.120409    5629 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 3.462390958s
	I0814 09:59:22.120468    5629 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0814 09:59:22.274386    5629 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0814 09:59:22.274438    5629 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 3.616294084s
	I0814 09:59:22.274494    5629 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0814 09:59:22.855632    5629 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0814 09:59:22.855687    5629 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.197522584s
	I0814 09:59:22.855732    5629 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0814 09:59:23.039366    5629 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0814 09:59:23.039414    5629 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 4.381415416s
	I0814 09:59:23.039437    5629 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0814 09:59:23.231795    5629 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0814 09:59:23.231866    5629 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 4.573721083s
	I0814 09:59:23.231901    5629 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0814 09:59:25.911302    5629 cache.go:157] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0814 09:59:25.911351    5629 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.253350084s
	I0814 09:59:25.911375    5629 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0814 09:59:25.911423    5629 cache.go:87] Successfully saved all images to host disk.
	I0814 09:59:26.016893    5629 start.go:360] acquireMachinesLock for no-preload-843000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:26.017323    5629 start.go:364] duration metric: took 361.791µs to acquireMachinesLock for "no-preload-843000"
	I0814 09:59:26.017419    5629 start.go:93] Provisioning new machine with config: &{Name:no-preload-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:59:26.017684    5629 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:59:26.029169    5629 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:59:26.081243    5629 start.go:159] libmachine.API.Create for "no-preload-843000" (driver="qemu2")
	I0814 09:59:26.081287    5629 client.go:168] LocalClient.Create starting
	I0814 09:59:26.081401    5629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:59:26.081466    5629 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:26.081487    5629 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:26.081567    5629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:59:26.081612    5629 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:26.081628    5629 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:26.082156    5629 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:59:26.250864    5629 main.go:141] libmachine: Creating SSH key...
	I0814 09:59:26.358060    5629 main.go:141] libmachine: Creating Disk image...
	I0814 09:59:26.358070    5629 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:59:26.358259    5629 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2
	I0814 09:59:26.367602    5629 main.go:141] libmachine: STDOUT: 
	I0814 09:59:26.367627    5629 main.go:141] libmachine: STDERR: 
	I0814 09:59:26.367679    5629 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2 +20000M
	I0814 09:59:26.375627    5629 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:59:26.375642    5629 main.go:141] libmachine: STDERR: 
	I0814 09:59:26.375656    5629 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2
	I0814 09:59:26.375661    5629 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:59:26.375674    5629 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:26.375715    5629 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:12:19:cd:28:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2
	I0814 09:59:26.377349    5629 main.go:141] libmachine: STDOUT: 
	I0814 09:59:26.377365    5629 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:26.377378    5629 client.go:171] duration metric: took 296.099292ms to LocalClient.Create
	I0814 09:59:28.379580    5629 start.go:128] duration metric: took 2.361968167s to createHost
	I0814 09:59:28.379629    5629 start.go:83] releasing machines lock for "no-preload-843000", held for 2.362387s
	W0814 09:59:28.379925    5629 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-843000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-843000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:28.394529    5629 out.go:177] 
	W0814 09:59:28.398700    5629 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:59:28.398733    5629 out.go:239] * 
	* 
	W0814 09:59:28.401088    5629 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:59:28.415477    5629 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (68.882542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.95s)
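
Every start in this group fails at the same point: the image cache fills successfully, but the qemu2 driver cannot reach the socket_vmnet helper, so socket_vmnet_client exits before QEMU is ever launched and the profile is left in state=Stopped. A minimal diagnostic sketch (assuming only the socket path shown in the log above; this is not minikube code) that checks whether the helper is accepting connections:

	// socketcheck.go - a minimal sketch for probing the socket_vmnet helper;
	// the path is taken from the failing command line in the log above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "Connection refused" here matches the driver failure above and
			// usually means the socket_vmnet service is not running.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On this host such a probe would presumably fail the same way the driver does, which points at the socket_vmnet service rather than at minikube or the test itself.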

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-843000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-843000 create -f testdata/busybox.yaml: exit status 1 (28.702875ms)

** stderr **
	error: context "no-preload-843000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-843000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (30.326625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (31.158666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
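
This failure (and the kubectl-based subtests that follow) is derived from FirstStart: the VM never booted, so no kubeconfig context named "no-preload-843000" was ever written, and every "kubectl --context no-preload-843000 ..." invocation fails immediately with "context does not exist". A short sketch (a hypothetical pre-flight check, not part of the test suite) that verifies a context exists the same way kubectl resolves it:

	// contextcheck.go - hypothetical helper; lists kubeconfig contexts via
	// kubectl and reports whether the named one exists.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hasContext(name string) (bool, error) {
		// "kubectl config get-contexts -o name" prints one context per line.
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasContext("no-preload-843000")
		fmt.Println(ok, err) // expected: false <nil> here, since the context was never created
	}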

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-843000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-843000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-843000 describe deploy/metrics-server -n kube-system: exit status 1 (26.770792ms)

** stderr **
	error: context "no-preload-843000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-843000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (30.529125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.173150959s)

-- stdout --
	* [no-preload-843000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-843000" primary control-plane node in "no-preload-843000" cluster
	* Restarting existing qemu2 VM for "no-preload-843000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-843000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:59:32.619434    5707 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:59:32.619571    5707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:32.619574    5707 out.go:304] Setting ErrFile to fd 2...
	I0814 09:59:32.619577    5707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:32.619698    5707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:59:32.620720    5707 out.go:298] Setting JSON to false
	I0814 09:59:32.636701    5707 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3529,"bootTime":1723651243,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:59:32.636768    5707 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:59:32.641567    5707 out.go:177] * [no-preload-843000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:59:32.648618    5707 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:59:32.648655    5707 notify.go:220] Checking for updates...
	I0814 09:59:32.656575    5707 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:59:32.659528    5707 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:59:32.662511    5707 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:59:32.665572    5707 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:59:32.666887    5707 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:59:32.669836    5707 config.go:182] Loaded profile config "no-preload-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:32.670098    5707 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:59:32.673504    5707 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:59:32.678491    5707 start.go:297] selected driver: qemu2
	I0814 09:59:32.678498    5707 start.go:901] validating driver "qemu2" against &{Name:no-preload-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:59:32.678550    5707 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:59:32.680634    5707 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:59:32.680681    5707 cni.go:84] Creating CNI manager for ""
	I0814 09:59:32.680687    5707 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:59:32.680711    5707 start.go:340] cluster config:
	{Name:no-preload-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:59:32.683933    5707 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:32.692503    5707 out.go:177] * Starting "no-preload-843000" primary control-plane node in "no-preload-843000" cluster
	I0814 09:59:32.695512    5707 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:59:32.695571    5707 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/no-preload-843000/config.json ...
	I0814 09:59:32.695598    5707 cache.go:107] acquiring lock: {Name:mk5fd861231df5b1cda3ff3fa54d336af27b1727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:32.695607    5707 cache.go:107] acquiring lock: {Name:mkfd83cd9741cce3f1c9a99b465ed0babefe1730 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:32.695629    5707 cache.go:107] acquiring lock: {Name:mka06e641ac2c98a9a4dbbce88bc2d091b295a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:32.695647    5707 cache.go:115] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0814 09:59:32.695652    5707 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 56.5µs
	I0814 09:59:32.695662    5707 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0814 09:59:32.695668    5707 cache.go:107] acquiring lock: {Name:mk8567429bb8f01e257d0359d9e5b5282ac17105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:32.695675    5707 cache.go:115] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0814 09:59:32.695682    5707 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 80.625µs
	I0814 09:59:32.695685    5707 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0814 09:59:32.695692    5707 cache.go:115] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0814 09:59:32.695691    5707 cache.go:107] acquiring lock: {Name:mkea5323f7ef9d880ecfe3ee697c359223d50604 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:32.695700    5707 cache.go:115] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0814 09:59:32.695703    5707 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 35.583µs
	I0814 09:59:32.695706    5707 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0814 09:59:32.695749    5707 cache.go:115] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0814 09:59:32.695753    5707 cache.go:107] acquiring lock: {Name:mkdd836c5aada6b5f2c94ce532b2a564078db86b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:32.695696    5707 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 79µs
	I0814 09:59:32.695785    5707 cache.go:107] acquiring lock: {Name:mkb86e70acb6a2ad19ceb6ec70bcc143a32db50c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:32.695788    5707 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0814 09:59:32.695803    5707 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 105.583µs
	I0814 09:59:32.695810    5707 cache.go:107] acquiring lock: {Name:mkeb300ad6a0c77d0f7b70e9bc394a0bde46d181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:32.695818    5707 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0814 09:59:32.695837    5707 cache.go:115] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0814 09:59:32.695845    5707 cache.go:115] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0814 09:59:32.695871    5707 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 118.709µs
	I0814 09:59:32.695877    5707 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0814 09:59:32.695846    5707 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 91.875µs
	I0814 09:59:32.695877    5707 cache.go:115] /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0814 09:59:32.695882    5707 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0814 09:59:32.695887    5707 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 93.917µs
	I0814 09:59:32.695896    5707 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0814 09:59:32.695901    5707 cache.go:87] Successfully saved all images to host disk.
	I0814 09:59:32.695969    5707 start.go:360] acquireMachinesLock for no-preload-843000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:32.695999    5707 start.go:364] duration metric: took 22.875µs to acquireMachinesLock for "no-preload-843000"
	I0814 09:59:32.696008    5707 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:59:32.696013    5707 fix.go:54] fixHost starting: 
	I0814 09:59:32.696123    5707 fix.go:112] recreateIfNeeded on no-preload-843000: state=Stopped err=<nil>
	W0814 09:59:32.696132    5707 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:59:32.704530    5707 out.go:177] * Restarting existing qemu2 VM for "no-preload-843000" ...
	I0814 09:59:32.707580    5707 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:32.707616    5707 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:12:19:cd:28:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2
	I0814 09:59:32.709554    5707 main.go:141] libmachine: STDOUT: 
	I0814 09:59:32.709570    5707 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:32.709594    5707 fix.go:56] duration metric: took 13.581458ms for fixHost
	I0814 09:59:32.709598    5707 start.go:83] releasing machines lock for "no-preload-843000", held for 13.595875ms
	W0814 09:59:32.709604    5707 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:59:32.709636    5707 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:32.709641    5707 start.go:729] Will try again in 5 seconds ...
	I0814 09:59:37.711610    5707 start.go:360] acquireMachinesLock for no-preload-843000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:37.712039    5707 start.go:364] duration metric: took 324.917µs to acquireMachinesLock for "no-preload-843000"
	I0814 09:59:37.712193    5707 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:59:37.712214    5707 fix.go:54] fixHost starting: 
	I0814 09:59:37.712948    5707 fix.go:112] recreateIfNeeded on no-preload-843000: state=Stopped err=<nil>
	W0814 09:59:37.712974    5707 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:59:37.716567    5707 out.go:177] * Restarting existing qemu2 VM for "no-preload-843000" ...
	I0814 09:59:37.721416    5707 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:37.721683    5707 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:12:19:cd:28:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/no-preload-843000/disk.qcow2
	I0814 09:59:37.731545    5707 main.go:141] libmachine: STDOUT: 
	I0814 09:59:37.731626    5707 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:37.731730    5707 fix.go:56] duration metric: took 19.516959ms for fixHost
	I0814 09:59:37.731750    5707 start.go:83] releasing machines lock for "no-preload-843000", held for 19.689ms
	W0814 09:59:37.731976    5707 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-843000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-843000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:37.737563    5707 out.go:177] 
	W0814 09:59:37.740400    5707 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:59:37.740417    5707 out.go:239] * 
	* 
	W0814 09:59:37.742183    5707 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:59:37.750456    5707 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (68.484ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.24s)
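
SecondStart exercises the restart path rather than create: acquireMachinesLock succeeds, fixHost finds the machine in state=Stopped, the VM restart hits the same socket_vmnet connection-refused error, and the start logic retries once after five seconds (start.go:729) before exiting with GUEST_PROVISION. A sketch of that control flow, where startHost is a hypothetical stub that always fails the way the log shows (an illustration, not minikube's actual start code):

	// restartflow.go - illustrates the single retry visible in the log
	// (start.go:714 and start.go:729).
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start; on this host it would
	// always fail with the captured error.
	func startHost() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}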

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-843000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (33.2635ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-843000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-843000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-843000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.431417ms)

** stderr **
	error: context "no-preload-843000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-843000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (30.785416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-843000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (30.764084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-843000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-843000 --alsologtostderr -v=1: exit status 83 (39.859875ms)

-- stdout --
	* The control-plane node no-preload-843000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-843000"

-- /stdout --
** stderr ** 
	I0814 09:59:38.027173    5728 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:59:38.027341    5728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:38.027344    5728 out.go:304] Setting ErrFile to fd 2...
	I0814 09:59:38.027347    5728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:38.027488    5728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:59:38.027730    5728 out.go:298] Setting JSON to false
	I0814 09:59:38.027739    5728 mustload.go:65] Loading cluster: no-preload-843000
	I0814 09:59:38.027916    5728 config.go:182] Loaded profile config "no-preload-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:38.032326    5728 out.go:177] * The control-plane node no-preload-843000 host is not running: state=Stopped
	I0814 09:59:38.033543    5728 out.go:177]   To start a cluster, run: "minikube start -p no-preload-843000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-843000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (30.134917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (29.862958ms)

-- stdout --
	Stopped


-- /stdout --
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (10.09s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-377000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-377000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.017895583s)

-- stdout --
	* [embed-certs-377000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-377000" primary control-plane node in "embed-certs-377000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-377000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:59:38.340340    5745 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:59:38.340462    5745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:38.340465    5745 out.go:304] Setting ErrFile to fd 2...
	I0814 09:59:38.340471    5745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:38.340584    5745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:59:38.341617    5745 out.go:298] Setting JSON to false
	I0814 09:59:38.357939    5745 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3535,"bootTime":1723651243,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:59:38.358011    5745 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:59:38.362322    5745 out.go:177] * [embed-certs-377000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:59:38.369268    5745 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:59:38.369339    5745 notify.go:220] Checking for updates...
	I0814 09:59:38.376220    5745 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:59:38.379266    5745 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:59:38.382287    5745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:59:38.386294    5745 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:59:38.390378    5745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:59:38.393654    5745 config.go:182] Loaded profile config "cert-expiration-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:38.393714    5745 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:38.393771    5745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:59:38.398234    5745 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:59:38.405312    5745 start.go:297] selected driver: qemu2
	I0814 09:59:38.405320    5745 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:59:38.405331    5745 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:59:38.407647    5745 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:59:38.410271    5745 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:59:38.413413    5745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:59:38.413432    5745 cni.go:84] Creating CNI manager for ""
	I0814 09:59:38.413442    5745 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:59:38.413447    5745 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:59:38.413473    5745 start.go:340] cluster config:
	{Name:embed-certs-377000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:59:38.417384    5745 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:38.425270    5745 out.go:177] * Starting "embed-certs-377000" primary control-plane node in "embed-certs-377000" cluster
	I0814 09:59:38.429233    5745 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:59:38.429247    5745 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:59:38.429255    5745 cache.go:56] Caching tarball of preloaded images
	I0814 09:59:38.429306    5745 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:59:38.429311    5745 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:59:38.429374    5745 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/embed-certs-377000/config.json ...
	I0814 09:59:38.429384    5745 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/embed-certs-377000/config.json: {Name:mk742aa01e1a0fa74c6aa8052772338fb8151737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:59:38.429710    5745 start.go:360] acquireMachinesLock for embed-certs-377000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:38.429746    5745 start.go:364] duration metric: took 27.666µs to acquireMachinesLock for "embed-certs-377000"
	I0814 09:59:38.429758    5745 start.go:93] Provisioning new machine with config: &{Name:embed-certs-377000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:59:38.429790    5745 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:59:38.438293    5745 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:59:38.455846    5745 start.go:159] libmachine.API.Create for "embed-certs-377000" (driver="qemu2")
	I0814 09:59:38.455876    5745 client.go:168] LocalClient.Create starting
	I0814 09:59:38.455939    5745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:59:38.455973    5745 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:38.455983    5745 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:38.456026    5745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:59:38.456050    5745 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:38.456058    5745 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:38.456502    5745 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:59:38.611221    5745 main.go:141] libmachine: Creating SSH key...
	I0814 09:59:38.730249    5745 main.go:141] libmachine: Creating Disk image...
	I0814 09:59:38.730254    5745 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:59:38.730443    5745 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2
	I0814 09:59:38.739927    5745 main.go:141] libmachine: STDOUT: 
	I0814 09:59:38.739953    5745 main.go:141] libmachine: STDERR: 
	I0814 09:59:38.740010    5745 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2 +20000M
	I0814 09:59:38.747889    5745 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:59:38.747903    5745 main.go:141] libmachine: STDERR: 
	I0814 09:59:38.747926    5745 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2
	I0814 09:59:38.747931    5745 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:59:38.747944    5745 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:38.747968    5745 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:9e:91:b7:cd:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2
	I0814 09:59:38.749560    5745 main.go:141] libmachine: STDOUT: 
	I0814 09:59:38.749574    5745 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:38.749595    5745 client.go:171] duration metric: took 293.725708ms to LocalClient.Create
	I0814 09:59:40.751722    5745 start.go:128] duration metric: took 2.322012584s to createHost
	I0814 09:59:40.751818    5745 start.go:83] releasing machines lock for "embed-certs-377000", held for 2.3221375s
	W0814 09:59:40.751874    5745 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:40.763129    5745 out.go:177] * Deleting "embed-certs-377000" in qemu2 ...
	W0814 09:59:40.794219    5745 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:40.794243    5745 start.go:729] Will try again in 5 seconds ...
	I0814 09:59:45.796272    5745 start.go:360] acquireMachinesLock for embed-certs-377000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:45.796790    5745 start.go:364] duration metric: took 409.833µs to acquireMachinesLock for "embed-certs-377000"
	I0814 09:59:45.796929    5745 start.go:93] Provisioning new machine with config: &{Name:embed-certs-377000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:59:45.797216    5745 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:59:45.814945    5745 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:59:45.867100    5745 start.go:159] libmachine.API.Create for "embed-certs-377000" (driver="qemu2")
	I0814 09:59:45.867152    5745 client.go:168] LocalClient.Create starting
	I0814 09:59:45.867271    5745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:59:45.867335    5745 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:45.867352    5745 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:45.867421    5745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:59:45.867471    5745 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:45.867486    5745 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:45.868003    5745 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:59:46.033086    5745 main.go:141] libmachine: Creating SSH key...
	I0814 09:59:46.263089    5745 main.go:141] libmachine: Creating Disk image...
	I0814 09:59:46.263099    5745 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:59:46.263297    5745 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2
	I0814 09:59:46.272724    5745 main.go:141] libmachine: STDOUT: 
	I0814 09:59:46.272745    5745 main.go:141] libmachine: STDERR: 
	I0814 09:59:46.272791    5745 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2 +20000M
	I0814 09:59:46.280815    5745 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:59:46.280830    5745 main.go:141] libmachine: STDERR: 
	I0814 09:59:46.280841    5745 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2
	I0814 09:59:46.280846    5745 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:59:46.280855    5745 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:46.280891    5745 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:99:e1:75:7d:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2
	I0814 09:59:46.282493    5745 main.go:141] libmachine: STDOUT: 
	I0814 09:59:46.282507    5745 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:46.282519    5745 client.go:171] duration metric: took 415.378958ms to LocalClient.Create
	I0814 09:59:48.284622    5745 start.go:128] duration metric: took 2.487484125s to createHost
	I0814 09:59:48.284692    5745 start.go:83] releasing machines lock for "embed-certs-377000", held for 2.487985542s
	W0814 09:59:48.285061    5745 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-377000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-377000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:48.301721    5745 out.go:177] 
	W0814 09:59:48.305797    5745 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:59:48.305825    5745 out.go:239] * 
	* 
	W0814 09:59:48.307721    5745 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:59:48.315598    5745 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-377000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000: exit status 7 (67.845709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.09s)
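
Every attempt in this test fails at the same step: socket_vmnet_client cannot reach the UNIX socket at /var/run/socket_vmnet, so QEMU is never launched. A minimal diagnostic sketch for the build host, assuming socket_vmnet is installed at the paths shown in the log (the exact restart method depends on how the daemon was installed):

	# Verify the socket_vmnet daemon is up before re-running the qemu2 tests.
	ls -l /var/run/socket_vmnet    # the UNIX socket should exist
	pgrep -fl socket_vmnet         # the daemon process should be listed
	# If either check fails, (re)start the daemon first; a missing daemon
	# produces exactly the "Connection refused" seen above.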

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-377000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-377000 create -f testdata/busybox.yaml: exit status 1 (29.033ms)

** stderr ** 
	error: context "embed-certs-377000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-377000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000: exit status 7 (31.113625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-377000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000: exit status 7 (30.304ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
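
This failure is purely downstream of FirstStart: no VM ever booted, so minikube never wrote an embed-certs-377000 context into the kubeconfig. A quick way to confirm the cascade, assuming the KUBECONFIG path reported earlier in this log:

	# kubectl refuses to proceed when the context named by --context is absent.
	export KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	kubectl config get-contexts embed-certs-377000 || echo "context missing, as expected"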

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-377000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-377000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-377000 describe deploy/metrics-server -n kube-system: exit status 1 (26.447084ms)

** stderr ** 
	error: context "embed-certs-377000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-377000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000: exit status 7 (30.138417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
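
Note that the addons enable invocation itself passed: with the cluster down it only records the addon and its image/registry overrides in the profile config (the SecondStart config dump below shows CustomAddonImages and CustomAddonRegistries populated); only the follow-up kubectl describe fails. A sketch for inspecting what was recorded, hedged on the exact config.json layout:

	# Custom addon images/registries are persisted in the profile's config.json.
	grep -i -A2 MetricsServer \
	  /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/embed-certs-377000/config.json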

TestStartStop/group/embed-certs/serial/SecondStart (5.62s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-377000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-377000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.5610585s)

-- stdout --
	* [embed-certs-377000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-377000" primary control-plane node in "embed-certs-377000" cluster
	* Restarting existing qemu2 VM for "embed-certs-377000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-377000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:59:52.050660    5807 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:59:52.050793    5807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:52.050796    5807 out.go:304] Setting ErrFile to fd 2...
	I0814 09:59:52.050799    5807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:52.050935    5807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:59:52.051899    5807 out.go:298] Setting JSON to false
	I0814 09:59:52.068062    5807 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3549,"bootTime":1723651243,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:59:52.068140    5807 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:59:52.071956    5807 out.go:177] * [embed-certs-377000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:59:52.079866    5807 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:59:52.079912    5807 notify.go:220] Checking for updates...
	I0814 09:59:52.086794    5807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:59:52.089885    5807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:59:52.092832    5807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:59:52.095835    5807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:59:52.098872    5807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:59:52.100482    5807 config.go:182] Loaded profile config "embed-certs-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:52.100735    5807 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:59:52.104848    5807 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:59:52.111660    5807 start.go:297] selected driver: qemu2
	I0814 09:59:52.111666    5807 start.go:901] validating driver "qemu2" against &{Name:embed-certs-377000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:59:52.111717    5807 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:59:52.114158    5807 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:59:52.114200    5807 cni.go:84] Creating CNI manager for ""
	I0814 09:59:52.114210    5807 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:59:52.114247    5807 start.go:340] cluster config:
	{Name:embed-certs-377000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:59:52.117662    5807 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:52.124859    5807 out.go:177] * Starting "embed-certs-377000" primary control-plane node in "embed-certs-377000" cluster
	I0814 09:59:52.128778    5807 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:59:52.128793    5807 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:59:52.128800    5807 cache.go:56] Caching tarball of preloaded images
	I0814 09:59:52.128860    5807 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:59:52.128871    5807 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:59:52.128929    5807 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/embed-certs-377000/config.json ...
	I0814 09:59:52.129383    5807 start.go:360] acquireMachinesLock for embed-certs-377000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:52.129411    5807 start.go:364] duration metric: took 21.958µs to acquireMachinesLock for "embed-certs-377000"
	I0814 09:59:52.129420    5807 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:59:52.129424    5807 fix.go:54] fixHost starting: 
	I0814 09:59:52.129533    5807 fix.go:112] recreateIfNeeded on embed-certs-377000: state=Stopped err=<nil>
	W0814 09:59:52.129543    5807 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:59:52.133870    5807 out.go:177] * Restarting existing qemu2 VM for "embed-certs-377000" ...
	I0814 09:59:52.137796    5807 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:52.137851    5807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:99:e1:75:7d:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2
	I0814 09:59:52.139654    5807 main.go:141] libmachine: STDOUT: 
	I0814 09:59:52.139674    5807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:52.139699    5807 fix.go:56] duration metric: took 10.273375ms for fixHost
	I0814 09:59:52.139703    5807 start.go:83] releasing machines lock for "embed-certs-377000", held for 10.289208ms
	W0814 09:59:52.139710    5807 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:59:52.139740    5807 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:52.139744    5807 start.go:729] Will try again in 5 seconds ...
	I0814 09:59:57.141823    5807 start.go:360] acquireMachinesLock for embed-certs-377000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:57.502908    5807 start.go:364] duration metric: took 360.992167ms to acquireMachinesLock for "embed-certs-377000"
	I0814 09:59:57.503032    5807 start.go:96] Skipping create...Using existing machine configuration
	I0814 09:59:57.503056    5807 fix.go:54] fixHost starting: 
	I0814 09:59:57.503760    5807 fix.go:112] recreateIfNeeded on embed-certs-377000: state=Stopped err=<nil>
	W0814 09:59:57.503790    5807 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 09:59:57.520170    5807 out.go:177] * Restarting existing qemu2 VM for "embed-certs-377000" ...
	I0814 09:59:57.530152    5807 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:57.530375    5807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:99:e1:75:7d:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/embed-certs-377000/disk.qcow2
	I0814 09:59:57.539913    5807 main.go:141] libmachine: STDOUT: 
	I0814 09:59:57.539979    5807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:57.540091    5807 fix.go:56] duration metric: took 37.039375ms for fixHost
	I0814 09:59:57.540106    5807 start.go:83] releasing machines lock for "embed-certs-377000", held for 37.157458ms
	W0814 09:59:57.540287    5807 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-377000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-377000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:57.549172    5807 out.go:177] 
	W0814 09:59:57.553173    5807 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 09:59:57.553193    5807 out.go:239] * 
	* 
	W0814 09:59:57.555175    5807 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:59:57.567111    5807 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-377000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000: exit status 7 (61.071542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.62s)
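
SecondStart differs from FirstStart only in taking the fixHost/restart path for the existing machine; the launch command is the same socket_vmnet_client wrapper, so it fails identically. The daemon socket can be probed directly, assuming the BSD nc shipped with macOS:

	# Mirrors the connect() socket_vmnet_client performs before handing the
	# socket to qemu (-netdev socket,id=net0,fd=3); a refusal here reproduces
	# the failure without running minikube at all.
	nc -U /var/run/socket_vmnet < /dev/null && echo reachable || echo refused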

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-969000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-969000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.945923709s)

-- stdout --
	* [default-k8s-diff-port-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-969000" primary control-plane node in "default-k8s-diff-port-969000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:59:55.078827    5827 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:59:55.078980    5827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:55.078984    5827 out.go:304] Setting ErrFile to fd 2...
	I0814 09:59:55.078986    5827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:55.079130    5827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:59:55.080201    5827 out.go:298] Setting JSON to false
	I0814 09:59:55.096152    5827 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3552,"bootTime":1723651243,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:59:55.096216    5827 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:59:55.101059    5827 out.go:177] * [default-k8s-diff-port-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:59:55.108028    5827 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:59:55.108091    5827 notify.go:220] Checking for updates...
	I0814 09:59:55.114025    5827 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:59:55.116985    5827 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:59:55.120094    5827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:59:55.122999    5827 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:59:55.125945    5827 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:59:55.129334    5827 config.go:182] Loaded profile config "embed-certs-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:55.129391    5827 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:55.129449    5827 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:59:55.134012    5827 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:59:55.141004    5827 start.go:297] selected driver: qemu2
	I0814 09:59:55.141013    5827 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:59:55.141021    5827 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:59:55.143385    5827 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:59:55.147019    5827 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:59:55.150103    5827 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 09:59:55.150125    5827 cni.go:84] Creating CNI manager for ""
	I0814 09:59:55.150141    5827 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:59:55.150146    5827 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:59:55.150172    5827 start.go:340] cluster config:
	{Name:default-k8s-diff-port-969000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:59:55.153904    5827 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:55.161045    5827 out.go:177] * Starting "default-k8s-diff-port-969000" primary control-plane node in "default-k8s-diff-port-969000" cluster
	I0814 09:59:55.164991    5827 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:59:55.165009    5827 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:59:55.165022    5827 cache.go:56] Caching tarball of preloaded images
	I0814 09:59:55.165100    5827 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:59:55.165106    5827 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:59:55.165169    5827 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/default-k8s-diff-port-969000/config.json ...
	I0814 09:59:55.165180    5827 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/default-k8s-diff-port-969000/config.json: {Name:mk6159383355b58c14136b1690ca2aa09fbf2fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:59:55.165525    5827 start.go:360] acquireMachinesLock for default-k8s-diff-port-969000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:55.165564    5827 start.go:364] duration metric: took 30.167µs to acquireMachinesLock for "default-k8s-diff-port-969000"
	I0814 09:59:55.165577    5827 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:59:55.165610    5827 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:59:55.174030    5827 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:59:55.191855    5827 start.go:159] libmachine.API.Create for "default-k8s-diff-port-969000" (driver="qemu2")
	I0814 09:59:55.191880    5827 client.go:168] LocalClient.Create starting
	I0814 09:59:55.191943    5827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:59:55.191976    5827 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:55.191986    5827 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:55.192025    5827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:59:55.192048    5827 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:55.192055    5827 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:55.192483    5827 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:59:55.341839    5827 main.go:141] libmachine: Creating SSH key...
	I0814 09:59:55.480988    5827 main.go:141] libmachine: Creating Disk image...
	I0814 09:59:55.480996    5827 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:59:55.481205    5827 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2
	I0814 09:59:55.490853    5827 main.go:141] libmachine: STDOUT: 
	I0814 09:59:55.490871    5827 main.go:141] libmachine: STDERR: 
	I0814 09:59:55.490928    5827 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2 +20000M
	I0814 09:59:55.498829    5827 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:59:55.498844    5827 main.go:141] libmachine: STDERR: 
	I0814 09:59:55.498862    5827 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2
	I0814 09:59:55.498870    5827 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:59:55.498882    5827 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:55.498906    5827 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:a6:ab:36:66:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2
	I0814 09:59:55.500510    5827 main.go:141] libmachine: STDOUT: 
	I0814 09:59:55.500525    5827 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:55.500544    5827 client.go:171] duration metric: took 308.672042ms to LocalClient.Create
	I0814 09:59:57.502625    5827 start.go:128] duration metric: took 2.337100416s to createHost
	I0814 09:59:57.502782    5827 start.go:83] releasing machines lock for "default-k8s-diff-port-969000", held for 2.3372765s
	W0814 09:59:57.502842    5827 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:57.526137    5827 out.go:177] * Deleting "default-k8s-diff-port-969000" in qemu2 ...
	W0814 09:59:57.586957    5827 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 09:59:57.586986    5827 start.go:729] Will try again in 5 seconds ...
	I0814 10:00:02.588944    5827 start.go:360] acquireMachinesLock for default-k8s-diff-port-969000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 10:00:02.589323    5827 start.go:364] duration metric: took 298.375µs to acquireMachinesLock for "default-k8s-diff-port-969000"
	I0814 10:00:02.589417    5827 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 10:00:02.589607    5827 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 10:00:02.604902    5827 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 10:00:02.649318    5827 start.go:159] libmachine.API.Create for "default-k8s-diff-port-969000" (driver="qemu2")
	I0814 10:00:02.649372    5827 client.go:168] LocalClient.Create starting
	I0814 10:00:02.649515    5827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 10:00:02.649590    5827 main.go:141] libmachine: Decoding PEM data...
	I0814 10:00:02.649608    5827 main.go:141] libmachine: Parsing certificate...
	I0814 10:00:02.649675    5827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 10:00:02.649727    5827 main.go:141] libmachine: Decoding PEM data...
	I0814 10:00:02.649746    5827 main.go:141] libmachine: Parsing certificate...
	I0814 10:00:02.650346    5827 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 10:00:02.818999    5827 main.go:141] libmachine: Creating SSH key...
	I0814 10:00:02.932566    5827 main.go:141] libmachine: Creating Disk image...
	I0814 10:00:02.932571    5827 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 10:00:02.932740    5827 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2
	I0814 10:00:02.941863    5827 main.go:141] libmachine: STDOUT: 
	I0814 10:00:02.941882    5827 main.go:141] libmachine: STDERR: 
	I0814 10:00:02.941941    5827 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2 +20000M
	I0814 10:00:02.949916    5827 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 10:00:02.949936    5827 main.go:141] libmachine: STDERR: 
	I0814 10:00:02.949949    5827 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2
	I0814 10:00:02.949954    5827 main.go:141] libmachine: Starting QEMU VM...
	I0814 10:00:02.949962    5827 qemu.go:418] Using hvf for hardware acceleration
	I0814 10:00:02.949987    5827 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:5c:8f:ad:02:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2
	I0814 10:00:02.951809    5827 main.go:141] libmachine: STDOUT: 
	I0814 10:00:02.951826    5827 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 10:00:02.951843    5827 client.go:171] duration metric: took 302.477209ms to LocalClient.Create
	I0814 10:00:04.953888    5827 start.go:128] duration metric: took 2.364366625s to createHost
	I0814 10:00:04.953962    5827 start.go:83] releasing machines lock for "default-k8s-diff-port-969000", held for 2.364712459s
	W0814 10:00:04.954196    5827 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 10:00:04.963647    5827 out.go:177] 
	W0814 10:00:04.971863    5827 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 10:00:04.971898    5827 out.go:239] * 
	* 
	W0814 10:00:04.973402    5827 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 10:00:04.984594    5827 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-969000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000: exit status 7 (64.258042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.01s)
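
Note: every qemu2 start failure in this group dies at the same step: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so the VM is never created and each profile is left "Stopped". A minimal triage sketch for the build host follows, assuming the standard /opt/socket_vmnet install layout shown in the commands above; the launchd service name and gateway address below are assumptions taken from the socket_vmnet README, not from this report:

	# check that the socket exists and that a daemon is listening on it
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet   # assumed launchd service name
	# if the daemon is down, restart it manually (invocation per the socket_vmnet README)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet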

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-377000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000: exit status 7 (31.0515ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-377000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-377000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-377000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.659291ms)

** stderr ** 
	error: context "embed-certs-377000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-377000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000: exit status 7 (30.208084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-377000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000: exit status 7 (29.533791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-377000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-377000 --alsologtostderr -v=1: exit status 83 (45.052583ms)

-- stdout --
	* The control-plane node embed-certs-377000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-377000"

-- /stdout --
** stderr ** 
	I0814 09:59:57.832080    5849 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:59:57.832212    5849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:57.832215    5849 out.go:304] Setting ErrFile to fd 2...
	I0814 09:59:57.832217    5849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:57.832354    5849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:59:57.832581    5849 out.go:298] Setting JSON to false
	I0814 09:59:57.832591    5849 mustload.go:65] Loading cluster: embed-certs-377000
	I0814 09:59:57.832778    5849 config.go:182] Loaded profile config "embed-certs-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:57.836995    5849 out.go:177] * The control-plane node embed-certs-377000 host is not running: state=Stopped
	I0814 09:59:57.845326    5849 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-377000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-377000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000: exit status 7 (28.941209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-377000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000: exit status 7 (28.661084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-158000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-158000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.872656667s)

-- stdout --
	* [newest-cni-158000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-158000" primary control-plane node in "newest-cni-158000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-158000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 09:59:58.143445    5866 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:59:58.143676    5866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:58.143684    5866 out.go:304] Setting ErrFile to fd 2...
	I0814 09:59:58.143686    5866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:59:58.143852    5866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:59:58.144964    5866 out.go:298] Setting JSON to false
	I0814 09:59:58.161346    5866 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3555,"bootTime":1723651243,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:59:58.161436    5866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:59:58.166200    5866 out.go:177] * [newest-cni-158000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:59:58.171086    5866 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:59:58.171128    5866 notify.go:220] Checking for updates...
	I0814 09:59:58.178084    5866 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:59:58.181108    5866 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:59:58.184119    5866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:59:58.185592    5866 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:59:58.189127    5866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:59:58.192451    5866 config.go:182] Loaded profile config "default-k8s-diff-port-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:58.192510    5866 config.go:182] Loaded profile config "multinode-157000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:59:58.192560    5866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:59:58.196964    5866 out.go:177] * Using the qemu2 driver based on user configuration
	I0814 09:59:58.204127    5866 start.go:297] selected driver: qemu2
	I0814 09:59:58.204136    5866 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:59:58.204148    5866 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:59:58.206543    5866 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0814 09:59:58.206568    5866 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0814 09:59:58.215075    5866 out.go:177] * Automatically selected the socket_vmnet network
	I0814 09:59:58.218181    5866 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0814 09:59:58.218217    5866 cni.go:84] Creating CNI manager for ""
	I0814 09:59:58.218228    5866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:59:58.218236    5866 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:59:58.218265    5866 start.go:340] cluster config:
	{Name:newest-cni-158000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:59:58.222244    5866 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:59:58.230075    5866 out.go:177] * Starting "newest-cni-158000" primary control-plane node in "newest-cni-158000" cluster
	I0814 09:59:58.234112    5866 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:59:58.234135    5866 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:59:58.234144    5866 cache.go:56] Caching tarball of preloaded images
	I0814 09:59:58.234216    5866 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 09:59:58.234225    5866 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:59:58.234291    5866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/newest-cni-158000/config.json ...
	I0814 09:59:58.234303    5866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/newest-cni-158000/config.json: {Name:mk73a047b4f0daff7560c7e7107057f16b2a5b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:59:58.234547    5866 start.go:360] acquireMachinesLock for newest-cni-158000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 09:59:58.234591    5866 start.go:364] duration metric: took 33.459µs to acquireMachinesLock for "newest-cni-158000"
	I0814 09:59:58.234605    5866 start.go:93] Provisioning new machine with config: &{Name:newest-cni-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 09:59:58.234659    5866 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 09:59:58.242131    5866 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 09:59:58.260197    5866 start.go:159] libmachine.API.Create for "newest-cni-158000" (driver="qemu2")
	I0814 09:59:58.260223    5866 client.go:168] LocalClient.Create starting
	I0814 09:59:58.260288    5866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 09:59:58.260319    5866 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:58.260328    5866 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:58.260363    5866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 09:59:58.260385    5866 main.go:141] libmachine: Decoding PEM data...
	I0814 09:59:58.260391    5866 main.go:141] libmachine: Parsing certificate...
	I0814 09:59:58.260742    5866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 09:59:58.413259    5866 main.go:141] libmachine: Creating SSH key...
	I0814 09:59:58.517347    5866 main.go:141] libmachine: Creating Disk image...
	I0814 09:59:58.517352    5866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 09:59:58.517531    5866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2
	I0814 09:59:58.527133    5866 main.go:141] libmachine: STDOUT: 
	I0814 09:59:58.527154    5866 main.go:141] libmachine: STDERR: 
	I0814 09:59:58.527208    5866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2 +20000M
	I0814 09:59:58.535075    5866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 09:59:58.535090    5866 main.go:141] libmachine: STDERR: 
	I0814 09:59:58.535110    5866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2
	I0814 09:59:58.535116    5866 main.go:141] libmachine: Starting QEMU VM...
	I0814 09:59:58.535129    5866 qemu.go:418] Using hvf for hardware acceleration
	I0814 09:59:58.535165    5866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:5d:f0:17:10:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2
	I0814 09:59:58.536762    5866 main.go:141] libmachine: STDOUT: 
	I0814 09:59:58.536786    5866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 09:59:58.536804    5866 client.go:171] duration metric: took 276.58825ms to LocalClient.Create
	I0814 10:00:00.538927    5866 start.go:128] duration metric: took 2.304334334s to createHost
	I0814 10:00:00.538993    5866 start.go:83] releasing machines lock for "newest-cni-158000", held for 2.304493834s
	W0814 10:00:00.539048    5866 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 10:00:00.550195    5866 out.go:177] * Deleting "newest-cni-158000" in qemu2 ...
	W0814 10:00:00.587638    5866 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 10:00:00.587663    5866 start.go:729] Will try again in 5 seconds ...
	I0814 10:00:05.589664    5866 start.go:360] acquireMachinesLock for newest-cni-158000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 10:00:05.590129    5866 start.go:364] duration metric: took 289.083µs to acquireMachinesLock for "newest-cni-158000"
	I0814 10:00:05.590251    5866 start.go:93] Provisioning new machine with config: &{Name:newest-cni-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0814 10:00:05.590491    5866 start.go:125] createHost starting for "" (driver="qemu2")
	I0814 10:00:05.596110    5866 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 10:00:05.645902    5866 start.go:159] libmachine.API.Create for "newest-cni-158000" (driver="qemu2")
	I0814 10:00:05.645946    5866 client.go:168] LocalClient.Create starting
	I0814 10:00:05.646073    5866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/ca.pem
	I0814 10:00:05.646126    5866 main.go:141] libmachine: Decoding PEM data...
	I0814 10:00:05.646143    5866 main.go:141] libmachine: Parsing certificate...
	I0814 10:00:05.646211    5866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19446-1067/.minikube/certs/cert.pem
	I0814 10:00:05.646258    5866 main.go:141] libmachine: Decoding PEM data...
	I0814 10:00:05.646269    5866 main.go:141] libmachine: Parsing certificate...
	I0814 10:00:05.646919    5866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso...
	I0814 10:00:05.822270    5866 main.go:141] libmachine: Creating SSH key...
	I0814 10:00:05.918448    5866 main.go:141] libmachine: Creating Disk image...
	I0814 10:00:05.918460    5866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0814 10:00:05.918664    5866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2.raw /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2
	I0814 10:00:05.927927    5866 main.go:141] libmachine: STDOUT: 
	I0814 10:00:05.927944    5866 main.go:141] libmachine: STDERR: 
	I0814 10:00:05.927986    5866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2 +20000M
	I0814 10:00:05.935920    5866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0814 10:00:05.935938    5866 main.go:141] libmachine: STDERR: 
	I0814 10:00:05.935948    5866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2
	I0814 10:00:05.935953    5866 main.go:141] libmachine: Starting QEMU VM...
	I0814 10:00:05.935962    5866 qemu.go:418] Using hvf for hardware acceleration
	I0814 10:00:05.935991    5866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b4:58:75:01:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2
	I0814 10:00:05.937617    5866 main.go:141] libmachine: STDOUT: 
	I0814 10:00:05.937632    5866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 10:00:05.937646    5866 client.go:171] duration metric: took 291.704834ms to LocalClient.Create
	I0814 10:00:07.939759    5866 start.go:128] duration metric: took 2.349336959s to createHost
	I0814 10:00:07.939826    5866 start.go:83] releasing machines lock for "newest-cni-158000", held for 2.349772834s
	W0814 10:00:07.940278    5866 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-158000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-158000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 10:00:07.953891    5866 out.go:177] 
	W0814 10:00:07.961983    5866 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 10:00:07.962028    5866 out.go:239] * 
	* 
	W0814 10:00:07.964849    5866 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 10:00:07.971828    5866 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-158000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000: exit status 7 (62.534208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-158000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.94s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-969000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-969000 create -f testdata/busybox.yaml: exit status 1 (31.429917ms)

** stderr ** 
	error: context "default-k8s-diff-port-969000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-969000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000: exit status 7 (27.777084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000: exit status 7 (27.448125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-969000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-969000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-969000 describe deploy/metrics-server -n kube-system: exit status 1 (25.886ms)

** stderr ** 
	error: context "default-k8s-diff-port-969000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-969000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000: exit status 7 (28.607917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-969000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-969000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.744520959s)

-- stdout --
	* [default-k8s-diff-port-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-969000" primary control-plane node in "default-k8s-diff-port-969000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 10:00:07.317380    6214 out.go:291] Setting OutFile to fd 1 ...
	I0814 10:00:07.317502    6214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 10:00:07.317506    6214 out.go:304] Setting ErrFile to fd 2...
	I0814 10:00:07.317508    6214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 10:00:07.317629    6214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 10:00:07.318598    6214 out.go:298] Setting JSON to false
	I0814 10:00:07.334817    6214 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3564,"bootTime":1723651243,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 10:00:07.334886    6214 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 10:00:07.340121    6214 out.go:177] * [default-k8s-diff-port-969000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 10:00:07.347132    6214 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 10:00:07.347196    6214 notify.go:220] Checking for updates...
	I0814 10:00:07.353082    6214 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 10:00:07.356123    6214 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 10:00:07.359126    6214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 10:00:07.362073    6214 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 10:00:07.365184    6214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 10:00:07.368358    6214 config.go:182] Loaded profile config "default-k8s-diff-port-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 10:00:07.368627    6214 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 10:00:07.373071    6214 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 10:00:07.378993    6214 start.go:297] selected driver: qemu2
	I0814 10:00:07.379000    6214 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 10:00:07.379053    6214 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 10:00:07.381532    6214 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 10:00:07.381562    6214 cni.go:84] Creating CNI manager for ""
	I0814 10:00:07.381570    6214 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 10:00:07.381592    6214 start.go:340] cluster config:
	{Name:default-k8s-diff-port-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 10:00:07.385130    6214 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 10:00:07.393126    6214 out.go:177] * Starting "default-k8s-diff-port-969000" primary control-plane node in "default-k8s-diff-port-969000" cluster
	I0814 10:00:07.397073    6214 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 10:00:07.397102    6214 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 10:00:07.397114    6214 cache.go:56] Caching tarball of preloaded images
	I0814 10:00:07.397186    6214 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 10:00:07.397195    6214 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 10:00:07.397273    6214 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/default-k8s-diff-port-969000/config.json ...
	I0814 10:00:07.397649    6214 start.go:360] acquireMachinesLock for default-k8s-diff-port-969000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 10:00:07.939968    6214 start.go:364] duration metric: took 542.318625ms to acquireMachinesLock for "default-k8s-diff-port-969000"
	I0814 10:00:07.940132    6214 start.go:96] Skipping create...Using existing machine configuration
	I0814 10:00:07.940165    6214 fix.go:54] fixHost starting: 
	I0814 10:00:07.940951    6214 fix.go:112] recreateIfNeeded on default-k8s-diff-port-969000: state=Stopped err=<nil>
	W0814 10:00:07.940998    6214 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 10:00:07.957853    6214 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-969000" ...
	I0814 10:00:07.964958    6214 qemu.go:418] Using hvf for hardware acceleration
	I0814 10:00:07.965160    6214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:5c:8f:ad:02:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2
	I0814 10:00:07.974271    6214 main.go:141] libmachine: STDOUT: 
	I0814 10:00:07.974354    6214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 10:00:07.974490    6214 fix.go:56] duration metric: took 34.316041ms for fixHost
	I0814 10:00:07.974510    6214 start.go:83] releasing machines lock for "default-k8s-diff-port-969000", held for 34.492583ms
	W0814 10:00:07.974538    6214 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 10:00:07.974681    6214 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 10:00:07.974697    6214 start.go:729] Will try again in 5 seconds ...
	I0814 10:00:12.976315    6214 start.go:360] acquireMachinesLock for default-k8s-diff-port-969000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 10:00:12.976603    6214 start.go:364] duration metric: took 172.042µs to acquireMachinesLock for "default-k8s-diff-port-969000"
	I0814 10:00:12.976691    6214 start.go:96] Skipping create...Using existing machine configuration
	I0814 10:00:12.976708    6214 fix.go:54] fixHost starting: 
	I0814 10:00:12.977192    6214 fix.go:112] recreateIfNeeded on default-k8s-diff-port-969000: state=Stopped err=<nil>
	W0814 10:00:12.977210    6214 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 10:00:12.986633    6214 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-969000" ...
	I0814 10:00:12.989672    6214 qemu.go:418] Using hvf for hardware acceleration
	I0814 10:00:12.989935    6214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:5c:8f:ad:02:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/default-k8s-diff-port-969000/disk.qcow2
	I0814 10:00:12.998870    6214 main.go:141] libmachine: STDOUT: 
	I0814 10:00:12.998944    6214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 10:00:12.999084    6214 fix.go:56] duration metric: took 22.378708ms for fixHost
	I0814 10:00:12.999109    6214 start.go:83] releasing machines lock for "default-k8s-diff-port-969000", held for 22.488208ms
	W0814 10:00:12.999290    6214 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 10:00:13.006625    6214 out.go:177] 
	W0814 10:00:13.010713    6214 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 10:00:13.010767    6214 out.go:239] * 
	* 
	W0814 10:00:13.013085    6214 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 10:00:13.020598    6214 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-969000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000: exit status 7 (68.420875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.81s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-158000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-158000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.185038708s)

-- stdout --
	* [newest-cni-158000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-158000" primary control-plane node in "newest-cni-158000" cluster
	* Restarting existing qemu2 VM for "newest-cni-158000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-158000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0814 10:00:11.407944    6247 out.go:291] Setting OutFile to fd 1 ...
	I0814 10:00:11.408087    6247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 10:00:11.408090    6247 out.go:304] Setting ErrFile to fd 2...
	I0814 10:00:11.408093    6247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 10:00:11.408222    6247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 10:00:11.409233    6247 out.go:298] Setting JSON to false
	I0814 10:00:11.425233    6247 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3568,"bootTime":1723651243,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 10:00:11.425310    6247 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 10:00:11.430762    6247 out.go:177] * [newest-cni-158000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 10:00:11.437714    6247 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 10:00:11.437738    6247 notify.go:220] Checking for updates...
	I0814 10:00:11.444673    6247 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 10:00:11.447674    6247 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 10:00:11.450722    6247 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 10:00:11.453648    6247 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 10:00:11.456707    6247 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 10:00:11.460031    6247 config.go:182] Loaded profile config "newest-cni-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 10:00:11.460299    6247 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 10:00:11.463635    6247 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 10:00:11.470692    6247 start.go:297] selected driver: qemu2
	I0814 10:00:11.470702    6247 start.go:901] validating driver "qemu2" against &{Name:newest-cni-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 10:00:11.470774    6247 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 10:00:11.473061    6247 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0814 10:00:11.473109    6247 cni.go:84] Creating CNI manager for ""
	I0814 10:00:11.473117    6247 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 10:00:11.473146    6247 start.go:340] cluster config:
	{Name:newest-cni-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 10:00:11.476653    6247 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 10:00:11.483650    6247 out.go:177] * Starting "newest-cni-158000" primary control-plane node in "newest-cni-158000" cluster
	I0814 10:00:11.487536    6247 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 10:00:11.487549    6247 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 10:00:11.487559    6247 cache.go:56] Caching tarball of preloaded images
	I0814 10:00:11.487606    6247 preload.go:172] Found /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0814 10:00:11.487611    6247 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 10:00:11.487675    6247 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/newest-cni-158000/config.json ...
	I0814 10:00:11.488077    6247 start.go:360] acquireMachinesLock for newest-cni-158000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 10:00:11.488104    6247 start.go:364] duration metric: took 20.75µs to acquireMachinesLock for "newest-cni-158000"
	I0814 10:00:11.488113    6247 start.go:96] Skipping create...Using existing machine configuration
	I0814 10:00:11.488119    6247 fix.go:54] fixHost starting: 
	I0814 10:00:11.488232    6247 fix.go:112] recreateIfNeeded on newest-cni-158000: state=Stopped err=<nil>
	W0814 10:00:11.488240    6247 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 10:00:11.492727    6247 out.go:177] * Restarting existing qemu2 VM for "newest-cni-158000" ...
	I0814 10:00:11.500669    6247 qemu.go:418] Using hvf for hardware acceleration
	I0814 10:00:11.500711    6247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b4:58:75:01:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2
	I0814 10:00:11.502655    6247 main.go:141] libmachine: STDOUT: 
	I0814 10:00:11.502674    6247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 10:00:11.502699    6247 fix.go:56] duration metric: took 14.577917ms for fixHost
	I0814 10:00:11.502703    6247 start.go:83] releasing machines lock for "newest-cni-158000", held for 14.595416ms
	W0814 10:00:11.502709    6247 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 10:00:11.502751    6247 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 10:00:11.502756    6247 start.go:729] Will try again in 5 seconds ...
	I0814 10:00:16.504729    6247 start.go:360] acquireMachinesLock for newest-cni-158000: {Name:mkec63da22ca76b547e0b728b694944edcfd432c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 10:00:16.505230    6247 start.go:364] duration metric: took 350.083µs to acquireMachinesLock for "newest-cni-158000"
	I0814 10:00:16.505358    6247 start.go:96] Skipping create...Using existing machine configuration
	I0814 10:00:16.505377    6247 fix.go:54] fixHost starting: 
	I0814 10:00:16.506179    6247 fix.go:112] recreateIfNeeded on newest-cni-158000: state=Stopped err=<nil>
	W0814 10:00:16.506205    6247 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 10:00:16.514571    6247 out.go:177] * Restarting existing qemu2 VM for "newest-cni-158000" ...
	I0814 10:00:16.518535    6247 qemu.go:418] Using hvf for hardware acceleration
	I0814 10:00:16.518806    6247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b4:58:75:01:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19446-1067/.minikube/machines/newest-cni-158000/disk.qcow2
	I0814 10:00:16.528280    6247 main.go:141] libmachine: STDOUT: 
	I0814 10:00:16.528353    6247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0814 10:00:16.528443    6247 fix.go:56] duration metric: took 23.065208ms for fixHost
	I0814 10:00:16.528462    6247 start.go:83] releasing machines lock for "newest-cni-158000", held for 23.208416ms
	W0814 10:00:16.528681    6247 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-158000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-158000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0814 10:00:16.537640    6247 out.go:177] 
	W0814 10:00:16.541790    6247 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0814 10:00:16.541831    6247 out.go:239] * 
	* 
	W0814 10:00:16.544502    6247 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 10:00:16.552572    6247 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-158000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000: exit status 7 (66.660041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-158000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-969000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000: exit status 7 (32.110041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-969000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.60325ms)

** stderr ** 
	error: context "default-k8s-diff-port-969000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000: exit status 7 (29.668416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-969000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000: exit status 7 (28.783583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-969000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-969000 --alsologtostderr -v=1: exit status 83 (39.626084ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-969000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-969000"

-- /stdout --
** stderr ** 
	I0814 10:00:13.290627    6268 out.go:291] Setting OutFile to fd 1 ...
	I0814 10:00:13.290787    6268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 10:00:13.290790    6268 out.go:304] Setting ErrFile to fd 2...
	I0814 10:00:13.290792    6268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 10:00:13.290912    6268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 10:00:13.291146    6268 out.go:298] Setting JSON to false
	I0814 10:00:13.291155    6268 mustload.go:65] Loading cluster: default-k8s-diff-port-969000
	I0814 10:00:13.291341    6268 config.go:182] Loaded profile config "default-k8s-diff-port-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 10:00:13.295617    6268 out.go:177] * The control-plane node default-k8s-diff-port-969000 host is not running: state=Stopped
	I0814 10:00:13.299628    6268 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-969000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-969000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000: exit status 7 (29.066583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000: exit status 7 (28.69575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-158000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000: exit status 7 (30.881458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-158000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-158000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-158000 --alsologtostderr -v=1: exit status 83 (42.684292ms)

-- stdout --
	* The control-plane node newest-cni-158000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-158000"

-- /stdout --
** stderr ** 
	I0814 10:00:16.737032    6292 out.go:291] Setting OutFile to fd 1 ...
	I0814 10:00:16.737185    6292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 10:00:16.737189    6292 out.go:304] Setting ErrFile to fd 2...
	I0814 10:00:16.737191    6292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 10:00:16.737320    6292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 10:00:16.737549    6292 out.go:298] Setting JSON to false
	I0814 10:00:16.737558    6292 mustload.go:65] Loading cluster: newest-cni-158000
	I0814 10:00:16.737741    6292 config.go:182] Loaded profile config "newest-cni-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 10:00:16.742365    6292 out.go:177] * The control-plane node newest-cni-158000 host is not running: state=Stopped
	I0814 10:00:16.747316    6292 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-158000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-158000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000: exit status 7 (30.071292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-158000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000: exit status 7 (29.917458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-158000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (156/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.0/json-events 12.66
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.1
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 141.12
29 TestAddons/serial/Volcano 39.18
31 TestAddons/serial/GCPAuth/Namespaces 0.1
33 TestAddons/parallel/Registry 14.46
34 TestAddons/parallel/Ingress 18.5
35 TestAddons/parallel/InspektorGadget 11.26
36 TestAddons/parallel/MetricsServer 5.28
39 TestAddons/parallel/CSI 48.54
40 TestAddons/parallel/Headlamp 17.65
41 TestAddons/parallel/CloudSpanner 5.17
42 TestAddons/parallel/LocalPath 40.86
43 TestAddons/parallel/NvidiaDevicePlugin 6.15
44 TestAddons/parallel/Yakd 10.2
45 TestAddons/StoppedEnableDisable 12.43
53 TestHyperKitDriverInstallOrUpdate 11.17
56 TestErrorSpam/setup 35.37
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.26
59 TestErrorSpam/pause 0.7
60 TestErrorSpam/unpause 0.64
61 TestErrorSpam/stop 64.28
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 45.57
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.11
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.81
73 TestFunctional/serial/CacheCmd/cache/add_local 1.19
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.85
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 57.36
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 0.66
84 TestFunctional/serial/LogsFileCmd 0.67
85 TestFunctional/serial/InvalidService 4.47
87 TestFunctional/parallel/ConfigCmd 0.24
88 TestFunctional/parallel/DashboardCmd 6.59
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.26
96 TestFunctional/parallel/AddonsCmd 0.09
97 TestFunctional/parallel/PersistentVolumeClaim 25.6
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 1.78
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.47
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
111 TestFunctional/parallel/License 0.26
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.1
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
124 TestFunctional/parallel/ServiceCmd/List 0.32
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.12
127 TestFunctional/parallel/ServiceCmd/Format 0.11
128 TestFunctional/parallel/ServiceCmd/URL 0.1
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
130 TestFunctional/parallel/ProfileCmd/profile_list 0.13
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
132 TestFunctional/parallel/MountCmd/any-port 7.13
133 TestFunctional/parallel/MountCmd/specific-port 0.78
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.08
135 TestFunctional/parallel/Version/short 0.04
136 TestFunctional/parallel/Version/components 0.26
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
141 TestFunctional/parallel/ImageCommands/ImageBuild 1.91
142 TestFunctional/parallel/ImageCommands/Setup 1.7
143 TestFunctional/parallel/DockerEnv/bash 0.3
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.45
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.21
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 187.12
161 TestMultiControlPlane/serial/DeployApp 4.13
162 TestMultiControlPlane/serial/PingHostFromPods 0.72
163 TestMultiControlPlane/serial/AddWorkerNode 54.97
164 TestMultiControlPlane/serial/NodeLabels 0.15
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.23
166 TestMultiControlPlane/serial/CopyFile 4.06
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 78.32
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 1.82
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.04
257 TestStoppedBinaryUpgrade/Setup 1.85
259 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
277 TestNoKubernetes/serial/ProfileList 0.1
278 TestNoKubernetes/serial/Stop 2.72
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
294 TestStartStop/group/old-k8s-version/serial/Stop 3.59
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
305 TestStartStop/group/no-preload/serial/Stop 3.76
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
316 TestStartStop/group/embed-certs/serial/Stop 3.3
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.91
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
334 TestStartStop/group/newest-cni/serial/Stop 3.15
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-622000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-622000: exit status 85 (93.65975ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-622000 | jenkins | v1.33.1 | 14 Aug 24 09:09 PDT |          |
	|         | -p download-only-622000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 09:09:06
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:09:06.011850    1602 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:09:06.012023    1602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:09:06.012026    1602 out.go:304] Setting ErrFile to fd 2...
	I0814 09:09:06.012029    1602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:09:06.012169    1602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	W0814 09:09:06.012253    1602 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19446-1067/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19446-1067/.minikube/config/config.json: no such file or directory
	I0814 09:09:06.013521    1602 out.go:298] Setting JSON to true
	I0814 09:09:06.031808    1602 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":503,"bootTime":1723651243,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:09:06.031880    1602 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:09:06.039466    1602 out.go:97] [download-only-622000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:09:06.039634    1602 notify.go:220] Checking for updates...
	W0814 09:09:06.039682    1602 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball: no such file or directory
	I0814 09:09:06.043425    1602 out.go:169] MINIKUBE_LOCATION=19446
	I0814 09:09:06.050465    1602 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:09:06.056503    1602 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:09:06.060391    1602 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:09:06.063434    1602 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	W0814 09:09:06.069438    1602 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0814 09:09:06.069682    1602 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:09:06.074439    1602 out.go:97] Using the qemu2 driver based on user configuration
	I0814 09:09:06.074470    1602 start.go:297] selected driver: qemu2
	I0814 09:09:06.074493    1602 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:09:06.074607    1602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:09:06.078469    1602 out.go:169] Automatically selected the socket_vmnet network
	I0814 09:09:06.085218    1602 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0814 09:09:06.085311    1602 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 09:09:06.085399    1602 cni.go:84] Creating CNI manager for ""
	I0814 09:09:06.085423    1602 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0814 09:09:06.085479    1602 start.go:340] cluster config:
	{Name:download-only-622000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:09:06.090945    1602 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:09:06.095420    1602 out.go:97] Downloading VM boot image ...
	I0814 09:09:06.095435    1602 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/iso/arm64/minikube-v1.33.1-1723567878-19429-arm64.iso
	I0814 09:09:14.926864    1602 out.go:97] Starting "download-only-622000" primary control-plane node in "download-only-622000" cluster
	I0814 09:09:14.926897    1602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0814 09:09:14.989182    1602 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0814 09:09:14.989191    1602 cache.go:56] Caching tarball of preloaded images
	I0814 09:09:14.989386    1602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0814 09:09:14.994426    1602 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0814 09:09:14.994433    1602 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0814 09:09:15.084448    1602 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0814 09:09:25.127013    1602 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0814 09:09:25.127195    1602 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0814 09:09:25.826672    1602 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0814 09:09:25.826888    1602 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/download-only-622000/config.json ...
	I0814 09:09:25.826907    1602 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/download-only-622000/config.json: {Name:mk45d6afe9bef05848dff417b6d0ed76463e3de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:09:25.827157    1602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0814 09:09:25.827397    1602 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0814 09:09:26.198437    1602 out.go:169] 
	W0814 09:09:26.202335    1602 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19446-1067/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920 0x108bcb920] Decompressors:map[bz2:0x14000619090 gz:0x14000619098 tar:0x14000619040 tar.bz2:0x14000619050 tar.gz:0x14000619060 tar.xz:0x14000619070 tar.zst:0x14000619080 tbz2:0x14000619050 tgz:0x14000619060 txz:0x14000619070 tzst:0x14000619080 xz:0x140006190a0 zip:0x140006190b0 zst:0x140006190a8] Getters:map[file:0x14000062880 http:0x1400068e320 https:0x1400068e370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0814 09:09:26.202361    1602 out_reason.go:110] 
	W0814 09:09:26.211367    1602 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 09:09:26.216252    1602 out.go:169] 
	
	
	* The control-plane node download-only-622000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-622000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
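Note: the 404 above is expected on this platform. Kubernetes v1.20.0 predates upstream darwin/arm64 kubectl release binaries (those appear from v1.21 on), so the checksum URL has nothing to serve; the test records the non-zero exit (line 185 above) without failing, which is why it still passes. A minimal way to confirm the missing upstream binary, assuming only curl and network access (URLs copied from the log; this check is illustrative, not part of the test):

	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # should print 404
	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256   # should print 200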

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-622000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.0/json-events (12.66s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-088000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-088000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (12.661493791s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (12.66s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-088000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-088000: exit status 85 (80.648125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-622000 | jenkins | v1.33.1 | 14 Aug 24 09:09 PDT |                     |
	|         | -p download-only-622000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 14 Aug 24 09:09 PDT | 14 Aug 24 09:09 PDT |
	| delete  | -p download-only-622000        | download-only-622000 | jenkins | v1.33.1 | 14 Aug 24 09:09 PDT | 14 Aug 24 09:09 PDT |
	| start   | -o=json --download-only        | download-only-088000 | jenkins | v1.33.1 | 14 Aug 24 09:09 PDT |                     |
	|         | -p download-only-088000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 09:09:26
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 09:09:26.633739    1629 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:09:26.633890    1629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:09:26.633893    1629 out.go:304] Setting ErrFile to fd 2...
	I0814 09:09:26.633895    1629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:09:26.634035    1629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:09:26.635088    1629 out.go:298] Setting JSON to true
	I0814 09:09:26.651336    1629 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":523,"bootTime":1723651243,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:09:26.651400    1629 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:09:26.656280    1629 out.go:97] [download-only-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:09:26.656436    1629 notify.go:220] Checking for updates...
	I0814 09:09:26.660200    1629 out.go:169] MINIKUBE_LOCATION=19446
	I0814 09:09:26.663295    1629 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:09:26.667255    1629 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:09:26.670287    1629 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:09:26.673300    1629 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	W0814 09:09:26.679226    1629 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0814 09:09:26.679410    1629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:09:26.682231    1629 out.go:97] Using the qemu2 driver based on user configuration
	I0814 09:09:26.682241    1629 start.go:297] selected driver: qemu2
	I0814 09:09:26.682246    1629 start.go:901] validating driver "qemu2" against <nil>
	I0814 09:09:26.682307    1629 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 09:09:26.685278    1629 out.go:169] Automatically selected the socket_vmnet network
	I0814 09:09:26.690462    1629 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0814 09:09:26.690560    1629 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 09:09:26.690598    1629 cni.go:84] Creating CNI manager for ""
	I0814 09:09:26.690605    1629 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0814 09:09:26.690611    1629 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 09:09:26.690655    1629 start.go:340] cluster config:
	{Name:download-only-088000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:09:26.694082    1629 iso.go:125] acquiring lock: {Name:mkda50f549908e1040ce4e19ea38376a2f640f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 09:09:26.697278    1629 out.go:97] Starting "download-only-088000" primary control-plane node in "download-only-088000" cluster
	I0814 09:09:26.697288    1629 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:09:26.757928    1629 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:09:26.757960    1629 cache.go:56] Caching tarball of preloaded images
	I0814 09:09:26.758850    1629 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:09:26.763169    1629 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0814 09:09:26.763176    1629 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0814 09:09:26.849535    1629 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0814 09:09:35.865930    1629 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0814 09:09:35.866093    1629 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0814 09:09:36.389614    1629 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0814 09:09:36.389823    1629 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/download-only-088000/config.json ...
	I0814 09:09:36.389841    1629 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/download-only-088000/config.json: {Name:mke8adcdf0232ff120a6fdda6aaab103232d54fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 09:09:36.390086    1629 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0814 09:09:36.390224    1629 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19446-1067/.minikube/cache/darwin/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-088000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-088000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-088000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.3s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-643000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-643000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-643000
--- PASS: TestBinaryMirror (0.30s)
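TestBinaryMirror verifies that --download-only can fetch Kubernetes binaries from a caller-supplied mirror instead of dl.k8s.io; the harness stands up a local HTTP server on a free port (49313 in this run) before invoking minikube. A hand-run sketch of the same idea, assuming a ./mirror directory laid out the way the mirror URL scheme expects (the directory name and layout here are illustrative assumptions, not taken from the test):

	# serve a local directory as the binary mirror (illustrative)
	python3 -m http.server 49313 --directory ./mirror &
	out/minikube-darwin-arm64 start --download-only -p binary-mirror-643000 \
	  --binary-mirror http://127.0.0.1:49313 --driver=qemu2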

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-937000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-937000: exit status 85 (57.439542ms)

-- stdout --
	* Profile "addons-937000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-937000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-937000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-937000: exit status 85 (53.414166ms)

-- stdout --
	* Profile "addons-937000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-937000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (141.12s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-937000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-937000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m21.1230145s)
--- PASS: TestAddons/Setup (141.12s)

TestAddons/serial/Volcano (39.18s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 7.521125ms
addons_test.go:897: volcano-scheduler stabilized in 7.5505ms
addons_test.go:913: volcano-controller stabilized in 7.599833ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-mpctb" [31ad0e94-551c-4866-8922-cca3b03f1c3b] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00459725s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-89d76" [10310e18-e0e0-4ccc-8df5-066736cf2c32] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.006033792s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-8dmmj" [097b334c-62b9-444a-9943-779decbb93b4] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.005644375s
addons_test.go:932: (dbg) Run:  kubectl --context addons-937000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-937000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-937000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [fa145542-5e8e-4141-9d14-89865107da5e] Pending
helpers_test.go:344: "test-job-nginx-0" [fa145542-5e8e-4141-9d14-89865107da5e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [fa145542-5e8e-4141-9d14-89865107da5e] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.006297125s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-937000 addons disable volcano --alsologtostderr -v=1: (9.910880667s)
--- PASS: TestAddons/serial/Volcano (39.18s)
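The three "healthy within" lines above come from the test helper polling pods by label until they report Running and Ready. A roughly equivalent hand check with plain kubectl, using the context name from this run (the timeout value is illustrative):

	kubectl --context addons-937000 -n volcano-system wait pod \
	  -l app=volcano-scheduler --for=condition=Ready --timeout=360s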

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-937000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-937000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Registry (14.46s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.1625ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-8xnfv" [df252dae-74b3-4a13-82b2-a9c5a015adac] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005643791s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-p88jn" [8dbaed67-1e3f-485e-b725-cfee93f9e5f0] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00473725s
addons_test.go:342: (dbg) Run:  kubectl --context addons-937000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-937000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-937000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.177053292s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 ip
2024/08/14 09:13:11 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.46s)

TestAddons/parallel/Ingress (18.5s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-937000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-937000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-937000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fdcc309d-9d4e-433f-8bc2-10989e5d99ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fdcc309d-9d4e-433f-8bc2-10989e5d99ce] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.007745541s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-937000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-937000 addons disable ingress --alsologtostderr -v=1: (7.280743709s)
--- PASS: TestAddons/parallel/Ingress (18.50s)
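The curl probe above runs inside the VM via "minikube ssh", hitting the ingress controller on 127.0.0.1 with a Host header. From the host, an analogous check can target the cluster IP reported by "minikube ip" (192.168.105.2 in this run), assuming the socket_vmnet network makes that IP reachable from the host:

	curl -s http://192.168.105.2/ -H 'Host: nginx.example.com'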

TestAddons/parallel/InspektorGadget (11.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-grfpm" [8a710715-506c-4289-bd16-13ae582ec6d4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004093667s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-937000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-937000: (5.255776333s)
--- PASS: TestAddons/parallel/InspektorGadget (11.26s)

TestAddons/parallel/MetricsServer (5.28s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.315ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-8bq5s" [753527e4-6321-4c93-aed1-0338fdc06fe8] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008334291s
addons_test.go:417: (dbg) Run:  kubectl --context addons-937000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.28s)

TestAddons/parallel/CSI (48.54s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 20.394541ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-937000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-937000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [723e42a5-8678-4fa3-b3b2-2512f5c35baa] Pending
helpers_test.go:344: "task-pv-pod" [723e42a5-8678-4fa3-b3b2-2512f5c35baa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [723e42a5-8678-4fa3-b3b2-2512f5c35baa] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.010816333s
addons_test.go:590: (dbg) Run:  kubectl --context addons-937000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-937000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-937000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-937000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-937000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-937000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-937000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c1efa6eb-b101-4de2-af3d-b9d441967244] Pending
helpers_test.go:344: "task-pv-pod-restore" [c1efa6eb-b101-4de2-af3d-b9d441967244] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c1efa6eb-b101-4de2-af3d-b9d441967244] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004296041s
addons_test.go:632: (dbg) Run:  kubectl --context addons-937000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-937000 delete pod task-pv-pod-restore: (1.0543705s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-937000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-937000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-937000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.127381417s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.54s)
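The repeated "get pvc ... -o jsonpath={.status.phase}" lines above are the helper re-polling until the claim leaves Pending. A minimal shell equivalent of that retry loop (illustrative; the real helper also enforces the 6m0s deadline and logs each attempt):

	# poll until the PVC binds (illustrative sketch of the helper's loop)
	until [ "$(kubectl --context addons-937000 get pvc hpvc -o 'jsonpath={.status.phase}')" = "Bound" ]; do
	  sleep 2
	done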

TestAddons/parallel/Headlamp (17.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-937000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-psz8h" [84b281fc-8c0f-4ba6-81f9-adff6822f407] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-psz8h" [84b281fc-8c0f-4ba6-81f9-adff6822f407] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.020723208s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-937000 addons disable headlamp --alsologtostderr -v=1: (5.290436375s)
--- PASS: TestAddons/parallel/Headlamp (17.65s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-hxp7j" [d8411882-c06a-4f4e-b3db-fb56aa48bed9] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004847792s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-937000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (40.86s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-937000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-937000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [63d460eb-ebfc-43ff-b477-2a4555b807b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [63d460eb-ebfc-43ff-b477-2a4555b807b4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [63d460eb-ebfc-43ff-b477-2a4555b807b4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004269416s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-937000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 ssh "cat /opt/local-path-provisioner/pvc-01d18535-4f3d-45d2-965b-de9efea65743_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-937000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-937000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-937000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.393599833s)
--- PASS: TestAddons/parallel/LocalPath (40.86s)
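
The six repeated helpers_test.go:394 runs above are a single poll loop: the helper re-executes kubectl until the claim's .status.phase reads Bound or the 5m0s budget expires. A minimal Go sketch of that pattern, assuming only kubectl on PATH (waitForPVCPhase is a hypothetical name, not the helper's real one):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls `kubectl get pvc` until the claim reports the wanted
// phase or the deadline passes, mirroring the retry loop visible in the log.
func waitForPVCPhase(kubectx, ns, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s never reached phase %q", ns, name, want)
}

func main() {
	fmt.Println(waitForPVCPhase("addons-937000", "default", "test-pvc", "Bound", 5*time.Minute))
}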

TestAddons/parallel/NvidiaDevicePlugin (6.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gmp7k" [291a86f3-538d-4137-880c-e78b982cc12a] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004320833s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-937000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.15s)

TestAddons/parallel/Yakd (10.2s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-7ctt9" [f430d815-23a6-416c-a983-684918627a7a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004076834s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-937000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-937000 addons disable yakd --alsologtostderr -v=1: (5.197534917s)
--- PASS: TestAddons/parallel/Yakd (10.20s)

TestAddons/StoppedEnableDisable (12.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-937000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-937000: (12.238236458s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-937000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-937000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-937000
--- PASS: TestAddons/StoppedEnableDisable (12.43s)

TestHyperKitDriverInstallOrUpdate (11.17s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.17s)

TestErrorSpam/setup (35.37s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-702000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-702000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 --driver=qemu2 : (35.366308875s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (35.37s)
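
The "acceptable stderr" line is minikube's kubectl version-skew warning: host kubectl 1.29.2 is two minor versions behind cluster Kubernetes 1.31.0, outside the one-minor-version skew kubectl is generally supported for. A sketch of just that comparison (assumed form; minikube's actual check may differ):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf returns the minor component of a "major.minor.patch" version string.
func minorOf(v string) int {
	n, _ := strconv.Atoi(strings.Split(strings.TrimPrefix(v, "v"), ".")[1])
	return n
}

func main() {
	host, cluster := "1.29.2", "1.31.0"
	// Warn when host kubectl and cluster differ by more than one minor version.
	if d := minorOf(cluster) - minorOf(host); d > 1 || d < -1 {
		fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n", host, cluster)
	}
}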

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 pause
--- PASS: TestErrorSpam/pause (0.70s)

TestErrorSpam/unpause (0.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 unpause
--- PASS: TestErrorSpam/unpause (0.64s)

TestErrorSpam/stop (64.28s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 stop: (12.217063667s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 stop: (26.029434958s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-702000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-702000 stop: (26.030197625s)
--- PASS: TestErrorSpam/stop (64.28s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19446-1067/.minikube/files/etc/test/nested/copy/1600/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.57s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-363000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0814 09:17:01.216863    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:01.224454    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:01.237328    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:01.259046    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:01.302420    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:01.385765    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:01.549115    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:01.872235    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:02.514037    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:03.795494    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:06.358898    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:11.482365    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-363000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (45.569105167s)
--- PASS: TestFunctional/serial/StartWithProxy (45.57s)
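
The twelve E0814 cert_rotation lines are one failing watch retrying against the client cert of the already-deleted addons-937000 profile; their timestamps, 09:17:01.216 through 09:17:11.482, space out in roughly doubling steps. A sketch of that backoff shape, assuming an 8ms base (client-go's real backoff adds jitter and caps, so this is illustrative only):

package main

import (
	"fmt"
	"time"
)

// Prints the cumulative offsets of a doubling backoff, matching the spacing
// of the retry timestamps in the log above.
func main() {
	delay := 8 * time.Millisecond // assumed base interval
	var offset time.Duration
	for attempt := 1; attempt <= 12; attempt++ {
		fmt.Printf("attempt %2d at +%v\n", attempt, offset.Round(time.Millisecond))
		offset += delay
		delay *= 2
	}
}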

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.11s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-363000 --alsologtostderr -v=8
E0814 09:17:21.724106    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:17:42.207137    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-363000 --alsologtostderr -v=8: (36.112727291s)
functional_test.go:663: soft start took 36.113143125s for "functional-363000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.11s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-363000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-363000 cache add registry.k8s.io/pause:3.1: (1.130941s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.81s)

TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local518228570/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 cache add minikube-local-cache-test:functional-363000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 cache delete minikube-local-cache-test:functional-363000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-363000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-363000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (70.426333ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)
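
The cache_reload block above is a three-step round trip: delete the image inside the node, observe that crictl inspecti now fails, then run cache reload and observe that it succeeds again. The same flow driven from Go with os/exec, a sketch assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const mk = "out/minikube-darwin-arm64"
	const img = "registry.k8s.io/pause:latest"
	// Each step must succeed for the reload to count as verified; the real
	// test additionally asserts that the middle inspecti fails before reload.
	steps := [][]string{
		{mk, "-p", "functional-363000", "ssh", "sudo docker rmi " + img},
		{mk, "-p", "functional-363000", "cache", "reload"},
		{mk, "-p", "functional-363000", "ssh", "sudo crictl inspecti " + img},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("step %v failed: %v\n%s", s, err, out)
			return
		}
	}
	fmt.Println("image restored to the node's runtime")
}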

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.85s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 kubectl -- --context functional-363000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.85s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-363000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-363000 get pods: (1.015714458s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (57.36s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-363000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0814 09:18:23.169747    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-363000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (57.357949792s)
functional_test.go:761: restart took 57.358048708s for "functional-363000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (57.36s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-363000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
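
The phase/status pairs above come from a single kubectl -o=json call: the test decodes the control-plane pod list and reads .status.phase plus the Ready condition for each pod. A stripped-down sketch of that decode (the anonymous structs cover only the fields read here; the real test uses the full Kubernetes API types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just enough of the kubectl JSON to extract phase and readiness.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-363000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		// Control-plane pods carry a "component" label (etcd, kube-apiserver, ...).
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}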

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.67s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd518490876/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.67s)

TestFunctional/serial/InvalidService (4.47s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-363000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-363000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-363000: exit status 115 (135.878833ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31623 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-363000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-363000 delete -f testdata/invalidsvc.yaml: (1.230822791s)
--- PASS: TestFunctional/serial/InvalidService (4.47s)
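
The Non-zero exit line carries the real assertion: minikube service must fail with exit status 115, the code paired with the SVC_UNREACHABLE reason shown in the stderr box. Extracting that status from Go (a sketch against the same binary and profile as this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "service", "invalid-svc", "-p", "functional-363000")
	err := cmd.Run()
	// A non-zero exit surfaces as *exec.ExitError, which exposes the code.
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		fmt.Println("got expected SVC_UNREACHABLE exit status 115")
		return
	}
	fmt.Printf("unexpected result: %v\n", err)
}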

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-363000 config get cpus: exit status 14 (30.017334ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-363000 config get cpus: exit status 14 (31.638541ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

TestFunctional/parallel/DashboardCmd (6.59s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-363000 --alsologtostderr -v=1]
E0814 09:19:45.090745    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-363000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2294: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.59s)
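
The helpers_test.go:508 message is a benign race: the dashboard child exited on its own before teardown tried to kill it. Since Go 1.16 that case is distinguishable via os.ErrProcessDone, so a cleanup can ignore exactly this error. A self-contained sketch:

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("true")
	_ = cmd.Start()
	_ = cmd.Wait() // the child has already finished when we try to kill it
	if err := cmd.Process.Kill(); err != nil {
		if errors.Is(err, os.ErrProcessDone) {
			fmt.Println("process already finished: safe to ignore") // the log's case
		} else {
			fmt.Println("kill failed:", err)
		}
	}
}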

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-363000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-363000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.496208ms)

-- stdout --
	* [functional-363000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0814 09:19:43.568239    2281 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:19:43.568369    2281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:19:43.568372    2281 out.go:304] Setting ErrFile to fd 2...
	I0814 09:19:43.568374    2281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:19:43.568488    2281 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:19:43.569468    2281 out.go:298] Setting JSON to false
	I0814 09:19:43.585683    2281 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1140,"bootTime":1723651243,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:19:43.585800    2281 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:19:43.590429    2281 out.go:177] * [functional-363000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0814 09:19:43.597333    2281 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:19:43.597371    2281 notify.go:220] Checking for updates...
	I0814 09:19:43.604292    2281 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:19:43.607347    2281 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:19:43.610323    2281 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:19:43.611699    2281 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:19:43.614320    2281 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:19:43.617605    2281 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:19:43.617864    2281 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:19:43.622204    2281 out.go:177] * Using the qemu2 driver based on existing profile
	I0814 09:19:43.629366    2281 start.go:297] selected driver: qemu2
	I0814 09:19:43.629374    2281 start.go:901] validating driver "qemu2" against &{Name:functional-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:19:43.629448    2281 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:19:43.635348    2281 out.go:177] 
	W0814 09:19:43.639346    2281 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0814 09:19:43.643318    2281 out.go:177] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-363000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
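
The RSRC_INSUFFICIENT_REQ_MEMORY exit is pure pre-flight arithmetic, which is why the dry run fails in about 115ms: the requested 250MiB is compared against the usable minimum of 1800MB before any VM work begins. The check reduced to a sketch (the constant and function names are illustrative, not minikube's):

package main

import "fmt"

const minUsableMB = 1800 // the minimum quoted in the log

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, as in the dry run above
	fmt.Println(validateMemory(4000)) // the profile's actual allocation passes
}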

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-363000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-363000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.475875ms)

-- stdout --
	* [functional-363000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0814 09:19:43.451720    2277 out.go:291] Setting OutFile to fd 1 ...
	I0814 09:19:43.451846    2277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:19:43.451850    2277 out.go:304] Setting ErrFile to fd 2...
	I0814 09:19:43.451852    2277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 09:19:43.451982    2277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
	I0814 09:19:43.453456    2277 out.go:298] Setting JSON to false
	I0814 09:19:43.471404    2277 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1140,"bootTime":1723651243,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0814 09:19:43.471497    2277 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0814 09:19:43.476321    2277 out.go:177] * [functional-363000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0814 09:19:43.484394    2277 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 09:19:43.484495    2277 notify.go:220] Checking for updates...
	I0814 09:19:43.492365    2277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	I0814 09:19:43.496323    2277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0814 09:19:43.499372    2277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 09:19:43.502319    2277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	I0814 09:19:43.505335    2277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 09:19:43.508652    2277 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0814 09:19:43.508893    2277 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 09:19:43.513314    2277 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0814 09:19:43.516300    2277 start.go:297] selected driver: qemu2
	I0814 09:19:43.516306    2277 start.go:901] validating driver "qemu2" against &{Name:functional-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 09:19:43.516356    2277 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 09:19:43.523354    2277 out.go:177] 
	W0814 09:19:43.527379    2277 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0814 09:19:43.531302    2277 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.6s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c685c553-4915-4e56-a6da-d0ae2593c730] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007068333s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-363000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-363000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-363000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-363000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [086a0602-417f-4003-992c-6bc513e0f30e] Pending
helpers_test.go:344: "sp-pod" [086a0602-417f-4003-992c-6bc513e0f30e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [086a0602-417f-4003-992c-6bc513e0f30e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.010138209s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-363000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-363000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-363000 delete -f testdata/storage-provisioner/pod.yaml: (1.057638125s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-363000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [00f3de12-46dd-4a61-b353-ff109852273f] Pending
helpers_test.go:344: "sp-pod" [00f3de12-46dd-4a61-b353-ff109852273f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [00f3de12-46dd-4a61-b353-ff109852273f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.012605458s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-363000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.60s)
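
The second sp-pod wait is the point of the test: the file touched before the first pod was deleted must still be listed by its replacement, since both pods mount the same claim. The round trip as a Go sketch over kubectl (omitting the Running-state wait the real test performs between apply and exec):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and wraps any failure with its combined output.
func run(args ...string) error {
	if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	steps := [][]string{
		{"--context", "functional-363000", "exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"--context", "functional-363000", "delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"--context", "functional-363000", "apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		// (a real check would wait for the new pod to reach Running here)
		{"--context", "functional-363000", "exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("file survived pod recreation")
}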

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (1.78s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh -n functional-363000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 cp functional-363000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2339820878/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh -n functional-363000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh -n functional-363000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-darwin-arm64 -p functional-363000 ssh -n functional-363000 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.398197917s)
--- PASS: TestFunctional/parallel/CpCmd (1.78s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1600/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "sudo cat /etc/test/nested/copy/1600/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
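
FileSync closes the loop opened by CopySyncFile earlier in the run: a file placed under the host's .minikube/files tree reappears at the same relative path inside the VM, so the check reduces to a prefix strip plus a cat over ssh. The path mapping itself, using the paths from this run:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Host-side sync root from this run; everything beneath it is mirrored
	// into the VM's root filesystem at the same relative path.
	const syncRoot = "/Users/jenkins/minikube-integration/19446-1067/.minikube/files"
	hostPath := syncRoot + "/etc/test/nested/copy/1600/hosts"
	vmPath := strings.TrimPrefix(hostPath, syncRoot)
	fmt.Println(vmPath) // /etc/test/nested/copy/1600/hosts
}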

TestFunctional/parallel/CertSync (0.47s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1600.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "sudo cat /etc/ssl/certs/1600.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1600.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "sudo cat /usr/share/ca-certificates/1600.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "sudo cat /etc/ssl/certs/16002.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "sudo cat /usr/share/ca-certificates/16002.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.47s)
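The 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash links, the scheme the guest's trust store uses to index synced certificates. A sketch of deriving such a hash, assuming a PEM certificate on the host (cert.pem is a placeholder path):

    # prints the 8-hex-digit subject hash used for the hash.0 links under /etc/ssl/certs
    openssl x509 -noout -hash -in cert.pem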

                                                
                                    
TestFunctional/parallel/NodeLabels (0.05s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-363000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
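The go-template above is self-contained and can be run outside the harness; it prints every label key on the first node as a space-separated list:

    kubectl --context functional-363000 get nodes --output=go-template \
      "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"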

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-363000 ssh "sudo systemctl is-active crio": exit status 1 (64.294542ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
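The non-zero exit is the expected outcome here: systemctl is-active exits 0 only when the unit is active, so the inactive output plus the failing status confirms cri-o is disabled while Docker is the selected runtime. Manual equivalent against the same profile:

    # a non-zero exit (surfaced by ssh as status 3 above) means the unit is not active
    out/minikube-darwin-arm64 -p functional-363000 ssh "sudo systemctl is-active crio"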

                                                
                                    
TestFunctional/parallel/License (0.26s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
2024/08/14 09:19:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-363000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-363000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-363000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2127: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-363000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-363000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-363000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ef4a7041-e4a7-42b0-8671-deb0c5302430] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ef4a7041-e4a7-42b0-8671-deb0c5302430] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003703875s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-363000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.107.17 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-363000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
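Taken together, the TunnelCmd serials cover the whole tunnel lifecycle: start, wait for a LoadBalancer ingress IP, reach the service directly and via cluster DNS, then tear down. Condensed, assuming the nginx-svc service from testdata/testsvc.yaml:

    # runs until killed; gives LoadBalancer services a host-reachable IP
    out/minikube-darwin-arm64 -p functional-363000 tunnel --alsologtostderr &
    # once assigned, the ingress IP (10.111.107.17 above) answers from the host
    kubectl --context functional-363000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}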

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-363000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-363000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-lbmsm" [37fcf8a6-1523-423e-9e21-eac41f1f779e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-lbmsm" [37fcf8a6-1523-423e-9e21-eac41f1f779e] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.01021575s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 service list -o json
functional_test.go:1494: Took "300.82075ms" to run "out/minikube-darwin-arm64 -p functional-363000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32479
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32479
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
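The ServiceCmd serials amount to the standard NodePort workflow; condensed from the commands above:

    kubectl --context functional-363000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-363000 expose deployment hello-node --type=NodePort --port=8080
    # prints the node endpoint, e.g. http://192.168.105.4:32479 above
    out/minikube-darwin-arm64 -p functional-363000 service hello-node --url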

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "91.054083ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "35.825917ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "88.333791ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.596125ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
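The -o json form is the machine-readable one; a sketch of extracting profile names from it (jq is not part of the harness, and the .valid[].Name field path is an assumption about the current output schema):

    out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name'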

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.13s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3669060053/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723652374185009000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3669060053/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723652374185009000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3669060053/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723652374185009000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3669060053/001/test-1723652374185009000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (61.530584ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (83.457792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 14 16:19 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 14 16:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 14 16:19 test-1723652374185009000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh cat /mount-9p/test-1723652374185009000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-363000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9a352448-e812-4073-a7d1-e480f6cb86d3] Pending
helpers_test.go:344: "busybox-mount" [9a352448-e812-4073-a7d1-e480f6cb86d3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9a352448-e812-4073-a7d1-e480f6cb86d3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9a352448-e812-4073-a7d1-e480f6cb86d3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003774584s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-363000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3669060053/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.13s)
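The any-port variant exercises the full 9p mount path: start the mount daemon, poll findmnt until the share appears, write through it from a pod, then unmount. The essentials, with a placeholder host directory:

    # shares a host directory into the guest over 9p; runs until killed
    out/minikube-darwin-arm64 mount -p functional-363000 /some/host/dir:/mount-9p --alsologtostderr -v=1 &
    # the initial non-zero findmnt exits above are just this check racing the mount
    out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T /mount-9p | grep 9p"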

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (0.78s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port7519249/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.992833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port7519249/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-363000 ssh "sudo umount -f /mount-9p": exit status 1 (65.0675ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-363000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port7519249/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.78s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.08s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2836227861/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2836227861/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2836227861/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T" /mount1: exit status 1 (74.488292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T" /mount1: exit status 1 (74.258542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-363000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2836227861/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2836227861/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-363000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2836227861/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.08s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.26s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-363000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-363000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-363000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-363000 image ls --format short --alsologtostderr:
I0814 09:19:54.761976    2437 out.go:291] Setting OutFile to fd 1 ...
I0814 09:19:54.762143    2437 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:19:54.762147    2437 out.go:304] Setting ErrFile to fd 2...
I0814 09:19:54.762150    2437 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:19:54.762272    2437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
I0814 09:19:54.762671    2437 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0814 09:19:54.762739    2437 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0814 09:19:54.763574    2437 ssh_runner.go:195] Run: systemctl --version
I0814 09:19:54.763586    2437 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/functional-363000/id_rsa Username:docker}
I0814 09:19:54.881196    2437 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)
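image ls accepts several output formats; the four exercised by this group (the table, json, and yaml variants appear in the subtests that follow):

    out/minikube-darwin-arm64 -p functional-363000 image ls --format short
    out/minikube-darwin-arm64 -p functional-363000 image ls --format table
    out/minikube-darwin-arm64 -p functional-363000 image ls --format json
    out/minikube-darwin-arm64 -p functional-363000 image ls --format yaml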

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-363000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kicbase/echo-server               | functional-363000 | ce2d2cda2d858 | 4.78MB |
| docker.io/library/minikube-local-cache-test | functional-363000 | 923638ff1c820 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/library/nginx                     | latest            | 235ff27fe7956 | 193MB  |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-363000 image ls --format table --alsologtostderr:
I0814 09:19:54.948176    2447 out.go:291] Setting OutFile to fd 1 ...
I0814 09:19:54.948312    2447 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:19:54.948316    2447 out.go:304] Setting ErrFile to fd 2...
I0814 09:19:54.948319    2447 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:19:54.948457    2447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
I0814 09:19:54.948875    2447 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0814 09:19:54.948944    2447 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0814 09:19:54.949706    2447 ssh_runner.go:195] Run: systemctl --version
I0814 09:19:54.949714    2447 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/functional-363000/id_rsa Username:docker}
I0814 09:19:54.977816    2447 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-363000 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-363000"],"size":"4780000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"27e3
830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDi
gests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"923638ff1c8206ec2f4a7e97572eff4f780aed00d4027a009b61a30a1b2eb971","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-363000
"],"size":"30"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-363000 image ls --format json --alsologtostderr:
I0814 09:19:54.945078    2446 out.go:291] Setting OutFile to fd 1 ...
I0814 09:19:54.945237    2446 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:19:54.945241    2446 out.go:304] Setting ErrFile to fd 2...
I0814 09:19:54.945243    2446 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:19:54.945373    2446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
I0814 09:19:54.945804    2446 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0814 09:19:54.945870    2446 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0814 09:19:54.946757    2446 ssh_runner.go:195] Run: systemctl --version
I0814 09:19:54.946768    2446 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/functional-363000/id_rsa Username:docker}
I0814 09:19:54.975270    2446 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-363000 image ls --format yaml --alsologtostderr:
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-363000
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 923638ff1c8206ec2f4a7e97572eff4f780aed00d4027a009b61a30a1b2eb971
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-363000
size: "30"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-363000 image ls --format yaml --alsologtostderr:
I0814 09:19:54.761976    2436 out.go:291] Setting OutFile to fd 1 ...
I0814 09:19:54.762150    2436 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:19:54.762153    2436 out.go:304] Setting ErrFile to fd 2...
I0814 09:19:54.762156    2436 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:19:54.762293    2436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
I0814 09:19:54.762718    2436 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0814 09:19:54.762797    2436 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0814 09:19:54.763953    2436 ssh_runner.go:195] Run: systemctl --version
I0814 09:19:54.763961    2436 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/functional-363000/id_rsa Username:docker}
I0814 09:19:54.881362    2436 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (1.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-363000 ssh pgrep buildkitd: exit status 1 (60.4205ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image build -t localhost/my-image:functional-363000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-363000 image build -t localhost/my-image:functional-363000 testdata/build --alsologtostderr: (1.776159583s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-363000 image build -t localhost/my-image:functional-363000 testdata/build --alsologtostderr:
I0814 09:19:55.080302    2452 out.go:291] Setting OutFile to fd 1 ...
I0814 09:19:55.080526    2452 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:19:55.080529    2452 out.go:304] Setting ErrFile to fd 2...
I0814 09:19:55.080535    2452 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 09:19:55.080654    2452 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19446-1067/.minikube/bin
I0814 09:19:55.081053    2452 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0814 09:19:55.081718    2452 config.go:182] Loaded profile config "functional-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0814 09:19:55.082509    2452 ssh_runner.go:195] Run: systemctl --version
I0814 09:19:55.082517    2452 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19446-1067/.minikube/machines/functional-363000/id_rsa Username:docker}
I0814 09:19:55.110086    2452 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3746185425.tar
I0814 09:19:55.110159    2452 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0814 09:19:55.113833    2452 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3746185425.tar
I0814 09:19:55.115486    2452 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3746185425.tar: stat -c "%s %y" /var/lib/minikube/build/build.3746185425.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3746185425.tar': No such file or directory
I0814 09:19:55.115501    2452 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3746185425.tar --> /var/lib/minikube/build/build.3746185425.tar (3072 bytes)
I0814 09:19:55.126388    2452 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3746185425
I0814 09:19:55.129708    2452 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3746185425 -xf /var/lib/minikube/build/build.3746185425.tar
I0814 09:19:55.132808    2452 docker.go:360] Building image: /var/lib/minikube/build/build.3746185425
I0814 09:19:55.132859    2452 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-363000 /var/lib/minikube/build/build.3746185425
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:d6d763694a79c86ced442a9cbdfd997803c87dfca6ed0d8f53f80f7d7450dc05 done
#8 naming to localhost/my-image:functional-363000 done
#8 DONE 0.0s
I0814 09:19:56.813257    2452 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-363000 /var/lib/minikube/build/build.3746185425: (1.680442584s)
I0814 09:19:56.813324    2452 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3746185425
I0814 09:19:56.817092    2452 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3746185425.tar
I0814 09:19:56.820117    2452 build_images.go:217] Built localhost/my-image:functional-363000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3746185425.tar
I0814 09:19:56.820135    2452 build_images.go:133] succeeded building to: functional-363000
I0814 09:19:56.820138    2452 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.91s)
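From build steps #1-#8 above, the testdata/build context evidently holds a three-instruction Dockerfile; a sketch consistent with that log (the actual file is not shown in this report):

    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /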

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.7s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.683808292s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-363000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.70s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.3s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-363000 docker-env) && out/minikube-darwin-arm64 status -p functional-363000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-363000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.30s)
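docker-env prints shell exports that point a local docker client at the Docker daemon inside the VM, which is why the test evaluates it in a bash subshell:

    # after eval, plain docker commands talk to the cluster's Docker daemon
    eval $(out/minikube-darwin-arm64 -p functional-363000 docker-env) && docker images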

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image load --daemon kicbase/echo-server:functional-363000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image load --daemon kicbase/echo-server:functional-363000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-363000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image load --daemon kicbase/echo-server:functional-363000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image save kicbase/echo-server:functional-363000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image rm kicbase/echo-server:functional-363000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-363000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-363000 image save --daemon kicbase/echo-server:functional-363000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-363000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
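
Taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon tests above exercise a full save, remove, reload round-trip for a cluster image. A sketch of the equivalent manual sequence (the tarball path here is illustrative; the tests used a Jenkins workspace path):

    # Export the image from the cluster to a tarball on the host.
    minikube -p functional-363000 image save kicbase/echo-server:functional-363000 /tmp/echo-server-save.tar

    # Remove it from the cluster, then restore it from the tarball.
    minikube -p functional-363000 image rm kicbase/echo-server:functional-363000
    minikube -p functional-363000 image load /tmp/echo-server-save.tar

    # Confirm the image is present again.
    minikube -p functional-363000 image ls
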
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-363000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-363000
--- PASS: TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-363000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-243000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0814 09:22:01.205463    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:22:28.928669    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-243000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m6.945159833s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (187.12s)
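
The --ha flag in the start command above provisions a cluster with multiple control-plane nodes (this run ends up with three control planes, plus a worker added later in the suite). A sketch of the start-and-verify pair the test performs:

    # Bring up an HA (multi-control-plane) cluster and wait for readiness.
    minikube start -p ha-243000 --ha --wait=true --memory=2200 --driver=qemu2

    # Report per-node status for every node in the profile.
    minikube -p ha-243000 status -v=7 --alsologtostderr
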
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-243000 -- rollout status deployment/busybox: (2.66305475s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-j46vt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-m2xxk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-mwccx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-j46vt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-m2xxk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-mwccx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-j46vt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-m2xxk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-mwccx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.13s)
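
The DeployApp test above boils down to: apply the busybox deployment, wait for the rollout, then run the same nslookup against every pod. A sketch that loops over the pod names instead of spelling each one out (jsonpath expression copied from the test):

    # Wait for the deployment, then check in-cluster DNS from each pod.
    minikube kubectl -p ha-243000 -- rollout status deployment/busybox
    for pod in $(minikube kubectl -p ha-243000 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      minikube kubectl -p ha-243000 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done
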
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-j46vt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-j46vt -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-m2xxk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-m2xxk -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-mwccx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-243000 -- exec busybox-7dff88458-mwccx -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.72s)
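
The pipeline in the test above extracts the host's IP from busybox nslookup output: it assumes the answer for host.minikube.internal lands on line 5 (awk 'NR==5') and that the address is the third space-separated field (cut -d' ' -f3); the follow-up ping then targets that address. Roughly, for a single pod:

    # Resolve host.minikube.internal inside the pod and keep only the IP
    # (layout assumption: busybox nslookup prints the answer on line 5, field 3).
    HOST_IP=$(minikube kubectl -p ha-243000 -- exec busybox-7dff88458-j46vt -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")

    # Verify the pod can reach the host network.
    minikube kubectl -p ha-243000 -- exec busybox-7dff88458-j46vt -- ping -c 1 "$HOST_IP"
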
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-243000 -v=7 --alsologtostderr
E0814 09:23:59.984461    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:23:59.991605    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:24:00.005021    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:24:00.028387    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:24:00.071821    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:24:00.155178    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:24:00.318667    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:24:00.642075    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:24:01.284910    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
E0814 09:24:02.568335    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-243000 -v=7 --alsologtostderr: (54.767011583s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.97s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-243000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.15s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.23s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp testdata/cp-test.txt ha-243000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3346062552/001/cp-test_ha-243000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000:/home/docker/cp-test.txt ha-243000-m02:/home/docker/cp-test_ha-243000_ha-243000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m02 "sudo cat /home/docker/cp-test_ha-243000_ha-243000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000:/home/docker/cp-test.txt ha-243000-m03:/home/docker/cp-test_ha-243000_ha-243000-m03.txt
E0814 09:24:05.131701    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/functional-363000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m03 "sudo cat /home/docker/cp-test_ha-243000_ha-243000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000:/home/docker/cp-test.txt ha-243000-m04:/home/docker/cp-test_ha-243000_ha-243000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m04 "sudo cat /home/docker/cp-test_ha-243000_ha-243000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp testdata/cp-test.txt ha-243000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3346062552/001/cp-test_ha-243000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m02:/home/docker/cp-test.txt ha-243000:/home/docker/cp-test_ha-243000-m02_ha-243000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000 "sudo cat /home/docker/cp-test_ha-243000-m02_ha-243000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m02:/home/docker/cp-test.txt ha-243000-m03:/home/docker/cp-test_ha-243000-m02_ha-243000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m03 "sudo cat /home/docker/cp-test_ha-243000-m02_ha-243000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m02:/home/docker/cp-test.txt ha-243000-m04:/home/docker/cp-test_ha-243000-m02_ha-243000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m04 "sudo cat /home/docker/cp-test_ha-243000-m02_ha-243000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp testdata/cp-test.txt ha-243000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3346062552/001/cp-test_ha-243000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m03:/home/docker/cp-test.txt ha-243000:/home/docker/cp-test_ha-243000-m03_ha-243000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000 "sudo cat /home/docker/cp-test_ha-243000-m03_ha-243000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m03:/home/docker/cp-test.txt ha-243000-m02:/home/docker/cp-test_ha-243000-m03_ha-243000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m02 "sudo cat /home/docker/cp-test_ha-243000-m03_ha-243000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m03:/home/docker/cp-test.txt ha-243000-m04:/home/docker/cp-test_ha-243000-m03_ha-243000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m04 "sudo cat /home/docker/cp-test_ha-243000-m03_ha-243000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp testdata/cp-test.txt ha-243000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3346062552/001/cp-test_ha-243000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m04:/home/docker/cp-test.txt ha-243000:/home/docker/cp-test_ha-243000-m04_ha-243000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000 "sudo cat /home/docker/cp-test_ha-243000-m04_ha-243000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m04:/home/docker/cp-test.txt ha-243000-m02:/home/docker/cp-test_ha-243000-m04_ha-243000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m02 "sudo cat /home/docker/cp-test_ha-243000-m04_ha-243000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 cp ha-243000-m04:/home/docker/cp-test.txt ha-243000-m03:/home/docker/cp-test_ha-243000-m04_ha-243000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-243000 ssh -n ha-243000-m03 "sudo cat /home/docker/cp-test_ha-243000-m04_ha-243000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.06s)
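
Every step in the CopyFile matrix above follows one verify pattern: copy with minikube cp, then cat the destination file over ssh to confirm the contents arrived. A sketch covering the two copy directions the test uses (node names from this run; the destination filename is illustrative):

    # Host-to-node copy, verified on the destination node.
    minikube -p ha-243000 cp testdata/cp-test.txt ha-243000-m02:/home/docker/cp-test.txt
    minikube -p ha-243000 ssh -n ha-243000-m02 "sudo cat /home/docker/cp-test.txt"

    # Node-to-node copy uses the <node>:<path> form on both sides.
    minikube -p ha-243000 cp ha-243000-m02:/home/docker/cp-test.txt ha-243000-m03:/home/docker/cp-test_copy.txt
    minikube -p ha-243000 ssh -n ha-243000-m03 "sudo cat /home/docker/cp-test_copy.txt"
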
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0814 09:33:24.264618    1600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19446-1067/.minikube/profiles/addons-937000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.3247215s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.32s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-079000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-079000 --output=json --user=testUser: (1.822732625s)
--- PASS: TestJSONOutput/stop/Command (1.82s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-574000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-574000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.305541ms)
-- stdout --
	{"specversion":"1.0","id":"181c6fda-3327-4b34-b3ed-4e1dca15584e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-574000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3719b7ce-c6bf-4343-ab71-e9f2a4133398","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19446"}}
	{"specversion":"1.0","id":"d5d6c636-e65c-4d5f-ba35-145c753bbdcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig"}}
	{"specversion":"1.0","id":"bbcd9c5a-8264-47fb-98f1-716cb95774b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c6eca0e4-fa9a-4c17-9511-c2c369fe6cc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6ca6ca3c-358c-4a4e-8dce-a69e56a9f755","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube"}}
	{"specversion":"1.0","id":"1d633e8e-11af-4f68-81ad-98f0b90a8083","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"410aa91a-e010-4455-82a7-1ceef5aa1ffb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-574000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-574000
--- PASS: TestErrorJSONOutput (0.20s)
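
With --output=json, minikube emits one CloudEvents-style JSON object per line (the io.k8s.sigs.minikube.error event above carries the exit code and message), so the stream can be filtered mechanically. A sketch using jq, which is an assumption here and not part of the test:

    # Keep only error events and surface their exit code and message.
    minikube start -p json-output-error-574000 --output=json --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data | {exitcode, message}'
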
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.85s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-996000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-463000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-463000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (95.009917ms)
-- stdout --
	* [NoKubernetes-463000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19446
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19446-1067/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19446-1067/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
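
The MK_USAGE failure above is the expected guard: --no-kubernetes and --kubernetes-version are mutually exclusive. When kubernetes-version is pinned in the global config, the remedy quoted in the error is to unset it before starting; a sketch:

    # Clear any globally pinned version, then start without Kubernetes.
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-463000 --no-kubernetes --driver=qemu2
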
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-463000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-463000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.118792ms)
-- stdout --
	* The control-plane node NoKubernetes-463000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-463000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.10s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-463000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-463000: (2.714965666s)
--- PASS: TestNoKubernetes/serial/Stop (2.72s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-463000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-463000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.334ms)
-- stdout --
	* The control-plane node NoKubernetes-463000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-463000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-629000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-629000 --alsologtostderr -v=3: (3.586342291s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.59s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (59.770916ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-629000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
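
The EnableAddonAfterStop blocks here and below all hinge on the same check: minikube status exits non-zero once the host is stopped (exit status 7 with "Stopped" in this run, which the test notes "may be ok"), and addon configuration is still accepted against the stopped profile. A sketch of the pair:

    # Exit status is non-zero (7 in this run) because the host is stopped.
    minikube status --format={{.Host}} -p old-k8s-version-629000 || echo "status exit: $?"

    # Addons can still be enabled while the profile is stopped.
    minikube addons enable dashboard -p old-k8s-version-629000
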
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-843000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-843000 --alsologtostderr -v=3: (3.755961583s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.76s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (53.702417ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-843000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-377000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-377000 --alsologtostderr -v=3: (3.297983542s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.30s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-377000 -n embed-certs-377000: exit status 7 (55.536208ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-377000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-969000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-969000 --alsologtostderr -v=3: (1.908905792s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.91s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-969000 -n default-k8s-diff-port-969000: exit status 7 (56.668958ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-969000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-158000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-158000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-158000 --alsologtostderr -v=3: (3.146182208s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.15s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-158000 -n newest-cni-158000: exit status 7 (55.366875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-158000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

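Platform-gated skips like this one (and the arm64 skips elsewhere in this section) conventionally test runtime.GOOS or runtime.GOARCH before doing any work. A minimal sketch, assuming nothing beyond the standard library:

    package sketch

    import (
        "runtime"
        "testing"
    )

    func TestKVMDriverInstallOrUpdateSketch(t *testing.T) {
        // On this job runtime.GOOS/GOARCH are "darwin"/"arm64", so the
        // guard fires and the test is skipped.
        if runtime.GOOS != "linux" {
            t.Skip("Skip if not linux.")
        }
        // ...KVM driver install/update checks would go here...
    }
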
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

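The four Kic* tests above are gated on the active driver rather than on the platform, so they all skip on this QEMU job. A sketch of driver gating, under the assumption that the harness exposes the selected driver through a test flag; the -driver flag here is hypothetical, not minikube's actual mechanism:

    package sketch

    import (
        "flag"
        "testing"
    )

    // driver is a hypothetical flag standing in for however the harness
    // records which VM/container driver is under test.
    var driver = flag.String("driver", "qemu2", "driver under test")

    func TestKicCustomNetworkSketch(t *testing.T) {
        if *driver != "docker" {
            t.Skipf("only runs with docker driver, got %q", *driver)
        }
    }
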
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626:
----------------------- debugLogs start: cilium-625000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-625000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-625000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-625000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-625000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-625000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-625000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-625000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-625000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-625000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-625000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: /etc/hosts:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: /etc/resolv.conf:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-625000

>>> host: crictl pods:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: crictl containers:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> k8s: describe netcat deployment:
error: context "cilium-625000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-625000" does not exist

>>> k8s: netcat logs:
error: context "cilium-625000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-625000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-625000" does not exist

>>> k8s: coredns logs:
error: context "cilium-625000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-625000" does not exist

>>> k8s: api server logs:
error: context "cilium-625000" does not exist

>>> host: /etc/cni:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: ip a s:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: ip r s:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: iptables-save:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: iptables table nat:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-625000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-625000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-625000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-625000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-625000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-625000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-625000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-625000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-625000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-625000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-625000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: kubelet daemon config:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> k8s: kubelet logs:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-625000

>>> host: docker daemon status:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: docker daemon config:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: docker system info:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: cri-docker daemon status:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: cri-docker daemon config:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: cri-dockerd version:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: containerd daemon status:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: containerd daemon config:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: containerd config dump:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: crio daemon status:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: crio daemon config:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: /etc/crio:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"

>>> host: crio config:
* Profile "cilium-625000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-625000"
----------------------- debugLogs end: cilium-625000 [took: 2.178903375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-625000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-625000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)

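Every probe in the debugLogs block above fails with the same two errors because the skip fired before a cluster was ever created, so neither the minikube profile nor the kubeconfig context cilium-625000 exists. A log collector could suppress that noise by checking for the context up front; the sketch below is illustrative, not minikube's actual code, and assumes only that kubectl is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // contextExists reports whether the kubeconfig defines a context with
    // the given name; "kubectl config get-contexts -o name" prints one
    // context name per line.
    func contextExists(name string) (bool, error) {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.TrimSpace(line) == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        if ok, err := contextExists("cilium-625000"); err != nil {
            fmt.Println("could not query kubectl:", err)
        } else if !ok {
            fmt.Println("skipping debug probes: context not found")
        }
    }
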
TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-006000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-006000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)